From: "Uwe Kleine-König" <u.kleine-koenig@pengutronix.de>
To: linux-rt-users@vger.kernel.org
Subject: tick_sched_timer taking much time on arm9/3.10-rt?
Date: Wed, 29 Jan 2014 16:49:39 +0100
Message-ID: <20140129154939.GQ26766@pengutronix.de>
Hello,

the tests were done on a 450 MHz ARM9 (Freescale i.MX28) with

	while :; do hackbench; done

	for i in $(seq -w 1 20); do cyclictest -m -n -p99 -l100000 -h800 -q > /tmp/bench_$i.hist; done

on 3.10.27-rt25.

The first diagram is what I got with NO_HZ + NO_HZ_IDLE; the second and
third are with HZ_PERIODIC + !NO_HZ. In the first diagram you can
clearly see two mountains, the smaller one being more or less a
scaled-down copy of the bigger one. In the second diagram the effect
isn't as clear, but it is still there; the third shows it clearly
again. (The first was taken on an earlier kernel version, but I saw the
same effect on 3.10.27-rt25, I just didn't keep the histograms.)

I used the default interval for cyclictest, i.e. 1000 us. So with
HZ=100, ten cyclictest timers expire per jiffy update. In 3 of 20
histograms I see this effect with the two mountains. One of them is the
2nd diagram in the attachment; the others are similar. The two peaks
are at:

	22161 samples @ [131, 135] us
	 1963 samples @ [220, 224] us

This corresponds approximately to the 1:10 ratio between jiffy updates
and cyclictest timer expiries.

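In case someone wants to check the numbers themselves, here is a rough
sketch of how the peaks can be pulled out of the saved histograms. It
assumes each data line in the .hist files is "<latency_us> <count>",
which should match what cyclictest -q -h prints for a single
measurement thread between its header and the trailing "#" summary
lines:

```python
def peaks(path, min_count=100):
    """Return (latency_us, count) local maxima of a cyclictest histogram."""
    hist = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            # keep only "<latency_us> <count>" data lines, skip "#" lines
            if len(parts) == 2 and all(p.isdigit() for p in parts):
                hist.append((int(parts[0]), int(parts[1])))
    # local maxima above a noise floor
    return [(us, n) for i, (us, n) in enumerate(hist)
            if n >= min_count
            and (i == 0 or hist[i - 1][1] <= n)
            and (i == len(hist) - 1 or hist[i + 1][1] < n)]
```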
Increasing HZ to 500 (and so a 1:2 ratio between jiffy updates and
timer expiries), the two mountains are approximately of equal height,
with peaks at:

	13025 samples @ [132, 136] us
	15007 samples @ [230, 234] us

see the third diagram. The distance between the peaks seems to be more
or less identical, so it doesn't scale with HZ. But the heights of the
mountains (i.e. the counts) roughly match the 1:2 ratio.

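Spelled out, the arithmetic behind that claim (peak counts taken from
the numbers above):

```python
def expected_fraction(hz, interval_us=1000):
    """Fraction of cyclictest wakeups expected to coincide with a jiffy
    update, assuming the tick period is an integer multiple of the
    cyclictest interval."""
    timers_per_tick = (1_000_000 // hz) // interval_us
    return 1 / timers_per_tick

# HZ=100: 10 expiries per tick -> expect ~1/10 of samples in the 2nd peak
print(expected_fraction(100))        # 0.1
print(1963 / (22161 + 1963))         # ~0.081, roughly matching

# HZ=500: 2 expiries per tick -> expect ~1/2 of samples in the 2nd peak
print(expected_fraction(500))        # 0.5
print(15007 / (13025 + 15007))       # ~0.535, roughly matching
```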
My explanation for why not all histograms show this effect---most show
just one such mountain, with at most a very few cycles with larger
latencies---is that it only happens when the jiffy update and the
cyclictest timer expiry land close enough together. The phase shift
between the jiffy update and the cyclictest timer stays stable over a
long time because HZ is a divisor of the cyclictest frequency. So it
seems reasonable that you either get a second, smaller mountain with a
size corresponding to HZ, or no second mountain at all.

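A minimal sketch of why the phase stays locked: when the tick period is
an integer multiple of the cyclictest interval, the offset of each
wakeup relative to the preceding jiffy update cycles through a fixed
pattern and never drifts, so a given run either always or never hits
the collision:

```python
def phases(interval_us, tick_us, start_offset_us, n):
    """Offset of each cyclictest wakeup relative to the preceding tick."""
    return [(start_offset_us + i * interval_us) % tick_us for i in range(n)]

# HZ=100: tick every 10000 us, wakeup every 1000 us -> the 10-entry
# offset pattern (fixed by the arbitrary start offset) repeats forever.
p = phases(1000, 10000, 300, 20)
print(p[:10] == p[10:])              # True: the pattern never drifts
```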
I created a trace with cyclictest -b (and a kernel built with ftrace
enabled, so the numbers are bigger); the relevant section is available
below (tell me if you want to see more context). What you can see is
that after the irq corresponding to cyclictest's timer expiry, the
following things happen apart from the normal scheduling stuff:

	tick_sched_timer taking 604 us
	another irq triggering, taking 261 us

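(The two durations are simply timestamp deltas in the trace below, as I
read it: __run_hrtimer entry for tick_sched_timer at 19617321us to its
enqueue_hrtimer at 19617925us, and icoll_handle_irq at 19618219us to
rcu_irq_exit at 19618480us.)

```python
print(19617925 - 19617321)   # tick_sched_timer: 604 us
print(19618480 - 19618219)   # second irq:       261 us
```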
I was surprised to see tick_sched_timer taking so much time, but I fail
to see anything pathological in the trace (apart from the long duration
of course). Also I expected a 450 MHz ARM9 to show better latencies.

Comparing with today's measurement on the i.MX27 (ARM9, 400 MHz) in the
OSADL lab
(https://www.osadl.org/Latency-plot-of-system-in-rack-2-slot.qa-latencyplot-r2s1.0.html,
01/29/14 13:23:12), the first two peaks have approximately the same
distance as on my i.MX28. The difference is that I don't see latencies
that low: my histogram starts at ~80 us, but at ~50 us on the system in
the test lab. But maybe this is not a valid comparison, as the i.MX27
runs 2.6.33-rt, which is very different from 3.10-rt.

Has anyone else seen similar results? As I don't see the effect in all
histograms, maybe you just tested once for a long time and were lucky
enough not to hit this case? Are you less surprised, or do you see the
culprit somewhere else? Do you have further debugging ideas?

Thanks for your time
Uwe
cyclicte-623 0....1.. 19616215us+: SyS_clock_nanosleep <-ret_fast_syscall
cyclicte-623 0....1.. 19616223us+: clockid_to_kclock <-SyS_clock_nanosleep
cyclicte-623 0....1.. 19616230us+: common_nsleep <-SyS_clock_nanosleep
cyclicte-623 0....1.. 19616236us+: hrtimer_nanosleep <-common_nsleep
cyclicte-623 0....1.. 19616242us+: hrtimer_init <-hrtimer_nanosleep
cyclicte-623 0....1.. 19616250us+: ktime_add_safe <-hrtimer_nanosleep
cyclicte-623 0....1.. 19616256us+: do_nanosleep <-hrtimer_nanosleep
cyclicte-623 0....1.. 19616263us+: hrtimer_start_range_ns <-do_nanosleep
cyclicte-623 0....1.. 19616268us+: __hrtimer_start_range_ns <-hrtimer_start_range_ns
cyclicte-623 0d...1.. 19616274us+: add_preempt_count <-__hrtimer_start_range_ns
cyclicte-623 0d...2.. 19616280us+: ktime_get <-__hrtimer_start_range_ns
cyclicte-623 0d...2.. 19616287us+: clocksource_mmio_readl_down <-ktime_get
cyclicte-623 0d...2.. 19616295us+: enqueue_hrtimer <-__hrtimer_start_range_ns
cyclicte-623 0d...2.. 19616302us+: hrtimer_reprogram <-__hrtimer_start_range_ns
cyclicte-623 0d...2.. 19616309us+: tick_program_event <-hrtimer_reprogram
cyclicte-623 0d...2.. 19616314us+: clockevents_program_event <-tick_program_event
cyclicte-623 0d...2.. 19616321us+: ktime_get <-clockevents_program_event
cyclicte-623 0d...2.. 19616327us+: clocksource_mmio_readl_down <-ktime_get
cyclicte-623 0d...4.. 19616340us+: add_preempt_count <-ring_buffer_lock_reserve
cyclicte-623 0d...5.. 19616346us+: sub_preempt_count <-ring_buffer_lock_reserve
---------------------- cyclictest sets up the next wakeup and schedules
cyclicte-623 0d...1.. 19616351us+: clockevents_program_event: delta = 487041, force = 0
cyclicte-623 0d...4.. 19616366us+: add_preempt_count <-ring_buffer_lock_reserve
cyclicte-623 0d...5.. 19616372us+: sub_preempt_count <-ring_buffer_lock_reserve
cyclicte-623 0d...1.. 19616376us+: clockevents_program_event: delta = 487041, clc = 11688
cyclicte-623 0d...2.. 19616387us+: timrotv2_set_next_event <-clockevents_program_event
cyclicte-623 0....2.. 19616394us+: sub_preempt_count <-__hrtimer_start_range_ns
cyclicte-623 0....1.. 19616400us+: schedule <-do_nanosleep
cyclicte-623 0....1.. 19616407us+: __schedule <-schedule
cyclicte-623 0....1.. 19616413us+: add_preempt_count <-__schedule
cyclicte-623 0....2.. 19616419us+: rcu_note_context_switch <-__schedule
cyclicte-623 0d...2.. 19616426us+: add_preempt_count <-__schedule
cyclicte-623 0d...3.. 19616432us+: deactivate_task <-__schedule
cyclicte-623 0d...3.. 19616438us+: dequeue_task <-deactivate_task
cyclicte-623 0d...3.. 19616443us+: update_rq_clock <-dequeue_task
cyclicte-623 0d...3.. 19616451us+: dequeue_task_rt <-dequeue_task
cyclicte-623 0d...3.. 19616457us+: update_curr_rt <-dequeue_task_rt
cyclicte-623 0d...3.. 19616463us+: cpuacct_charge <-update_curr_rt
cyclicte-623 0d...3.. 19616469us+: __rcu_read_lock <-cpuacct_charge
cyclicte-623 0d...3.. 19616476us+: __rcu_read_unlock <-cpuacct_charge
cyclicte-623 0d...3.. 19616483us+: add_preempt_count <-update_curr_rt
cyclicte-623 0d...4.. 19616489us+: sub_preempt_count <-update_curr_rt
cyclicte-623 0d...3.. 19616495us+: dequeue_rt_stack <-dequeue_task_rt
cyclicte-623 0d...3.. 19616502us+: put_prev_task_rt <-__schedule
cyclicte-623 0d...3.. 19616508us+: update_curr_rt <-put_prev_task_rt
cyclicte-623 0d...3.. 19616515us+: pick_next_task_stop <-__schedule
cyclicte-623 0d...3.. 19616520us+: pick_next_task_rt <-__schedule
cyclicte-623 0d...3.. 19616537us+: atomic_notifier_call_chain <-__switch_to
cyclicte-623 0d...3.. 19616551us+: __atomic_notifier_call_chain <-atomic_notifier_call_chain
cyclicte-623 0d...3.. 19616558us+: __rcu_read_lock <-__atomic_notifier_call_chain
cyclicte-623 0d...3.. 19616565us+: notifier_call_chain <-__atomic_notifier_call_chain
cyclicte-623 0d...3.. 19616571us+: __rcu_read_unlock <-__atomic_notifier_call_chain
----------------------
irq/215--41 0d...3.. 19616578us+: finish_task_switch.constprop.87 <-__schedule
irq/215--41 0....3.. 19616585us+: sub_preempt_count <-finish_task_switch.constprop.87
irq/215--41 0....2.. 19616591us+: sub_preempt_count <-__schedule
irq/215--41 0....1.. 19616598us+: kthread_should_stop <-irq_thread
irq/215--41 0....1.. 19616605us+: irq_forced_thread_fn <-irq_thread
irq/215--41 0....1.. 19616611us+: local_bh_disable <-irq_forced_thread_fn
irq/215--41 0....1.. 19616616us+: mxs_auart_irq_handle <-irq_forced_thread_fn
irq/215--41 0....1.. 19616624us+: mxs_auart_tx_chars <-mxs_auart_irq_handle
irq/215--41 0....1.. 19616644us+: uart_write_wakeup <-mxs_auart_tx_chars
irq/215--41 0....1.. 19616651us+: tty_wakeup <-uart_write_wakeup
irq/215--41 0....1.. 19616657us+: __wake_up <-tty_wakeup
irq/215--41 0....1.. 19616663us+: rt_spin_lock <-__wake_up
irq/215--41 0....1.. 19616669us+: rt_spin_lock_slowlock <-rt_spin_lock
irq/215--41 0....1.. 19616675us+: add_preempt_count <-rt_spin_lock_slowlock
irq/215--41 0....2.. 19616682us+: __try_to_take_rt_mutex <-rt_spin_lock_slowlock
irq/215--41 0....2.. 19616689us+: sub_preempt_count <-rt_spin_lock_slowlock
irq/215--41 0....1.. 19616696us+: __wake_up_common <-__wake_up
irq/215--41 0....1.. 19616702us+: default_wake_function <-__wake_up_common
irq/215--41 0....1.. 19616707us+: try_to_wake_up <-default_wake_function
irq/215--41 0d...1.. 19616713us+: add_preempt_count <-try_to_wake_up
irq/215--41 0....2.. 19616720us+: sub_preempt_count <-try_to_wake_up
irq/215--41 0....1.. 19616727us+: rt_spin_unlock <-__wake_up
irq/215--41 0....1.. 19616733us+: rt_spin_lock_slowunlock <-rt_spin_unlock
irq/215--41 0....1.. 19616738us+: add_preempt_count <-rt_spin_lock_slowunlock
irq/215--41 0....2.. 19616744us+: __rt_spin_lock_slowunlock <-rt_spin_lock_slowunlock
irq/215--41 0....2.. 19616750us+: sub_preempt_count <-__rt_spin_lock_slowunlock
irq/215--41 0....1.. 19616758us+: irq_finalize_oneshot <-irq_forced_thread_fn
irq/215--41 0d...1.. 19616764us+: add_preempt_count <-irq_finalize_oneshot
irq/215--41 0d...2.. 19616771us+: unmask_irq <-irq_finalize_oneshot
irq/215--41 0d...2.. 19616778us+: icoll_unmask_irq <-unmask_irq
irq/215--41 0....2.. 19616785us+: sub_preempt_count <-irq_finalize_oneshot
irq/215--41 0....1.. 19616792us+: local_bh_enable <-irq_forced_thread_fn
irq/215--41 0....1.. 19616799us+: note_interrupt <-irq_thread
irq/215--41 0....1.. 19616807us+: add_interrupt_randomness <-irq_thread
irq/215--41 0....1.. 19616813us+: read_current_timer <-add_interrupt_randomness
irq/215--41 0....1.. 19616822us+: wake_threads_waitq <-irq_thread
irq/215--41 0....1.. 19616828us+: kthread_should_stop <-irq_thread
irq/215--41 0....1.. 19616833us+: schedule <-irq_thread
irq/215--41 0....1.. 19616840us+: __schedule <-schedule
irq/215--41 0....1.. 19616846us+: add_preempt_count <-__schedule
irq/215--41 0....2.. 19616853us+: rcu_note_context_switch <-__schedule
irq/215--41 0d...2.. 19616859us+: add_preempt_count <-__schedule
irq/215--41 0d...3.. 19616864us+: deactivate_task <-__schedule
irq/215--41 0d...3.. 19616870us+: dequeue_task <-deactivate_task
irq/215--41 0d...3.. 19616876us+: update_rq_clock <-dequeue_task
irq/215--41 0d...3.. 19616883us+: dequeue_task_rt <-dequeue_task
irq/215--41 0d...3.. 19616889us+: update_curr_rt <-dequeue_task_rt
irq/215--41 0d...3.. 19616895us+: cpuacct_charge <-update_curr_rt
irq/215--41 0d...3.. 19616900us+: __rcu_read_lock <-cpuacct_charge
irq/215--41 0d...3.. 19616906us+: __rcu_read_unlock <-cpuacct_charge
irq/215--41 0d...3.. 19616912us+: add_preempt_count <-update_curr_rt
irq/215--41 0d...4.. 19616918us+: sub_preempt_count <-update_curr_rt
irq/215--41 0d...3.. 19616924us+: dequeue_rt_stack <-dequeue_task_rt
irq/215--41 0d...3.. 19616931us+: put_prev_task_rt <-__schedule
irq/215--41 0d...3.. 19616937us+: update_curr_rt <-put_prev_task_rt
irq/215--41 0d...3.. 19616942us+: pick_next_task_fair <-__schedule
irq/215--41 0d...3.. 19616949us+: clear_buddies <-pick_next_task_fair
irq/215--41 0d...3.. 19616956us+: __dequeue_entity <-pick_next_task_fair
irq/215--41 0d...3.. 19616963us+: clear_buddies <-pick_next_task_fair
irq/215--41 0d...3.. 19616969us+: __dequeue_entity <-pick_next_task_fair
irq/215--41 0d...3.. 19616976us+: clear_buddies <-pick_next_task_fair
irq/215--41 0d...3.. 19616981us+: __dequeue_entity <-pick_next_task_fair
irq/215--41 0d...3.. 19616988us+: clear_buddies <-pick_next_task_fair
irq/215--41 0d...3.. 19616993us+: __dequeue_entity <-pick_next_task_fair
irq/215--41 0d...3.. 19617011us+: atomic_notifier_call_chain <-__switch_to
irq/215--41 0d...3.. 19617018us+: __atomic_notifier_call_chain <-atomic_notifier_call_chain
irq/215--41 0d...3.. 19617023us+: __rcu_read_lock <-__atomic_notifier_call_chain
irq/215--41 0d...3.. 19617029us+: notifier_call_chain <-__atomic_notifier_call_chain
irq/215--41 0d...3.. 19617035us+: __rcu_read_unlock <-__atomic_notifier_call_chain
hackbenc-536 0d...3.. 19617043us+: finish_task_switch.constprop.87 <-__schedule
----------------------- timer irq triggers
hackbenc-536 0d...3.. 19617052us+: icoll_handle_irq <-__irq_svc
hackbenc-536 0d...3.. 19617059us+: irq_find_mapping <-icoll_handle_irq
hackbenc-536 0d...3.. 19617067us+: irq_linear_revmap <-irq_find_mapping
hackbenc-536 0d...3.. 19617073us+: handle_IRQ <-icoll_handle_irq
hackbenc-536 0d...3.. 19617079us+: irq_enter <-handle_IRQ
hackbenc-536 0d...3.. 19617084us+: rcu_irq_enter <-irq_enter
hackbenc-536 0d...3.. 19617092us+: add_preempt_count <-irq_enter
hackbenc-536 0d..h3.. 19617099us+: generic_handle_irq <-handle_IRQ
hackbenc-536 0d..h3.. 19617105us+: irq_to_desc <-generic_handle_irq
hackbenc-536 0d..h3.. 19617113us+: handle_level_irq <-generic_handle_irq
hackbenc-536 0d..h3.. 19617119us+: add_preempt_count <-handle_level_irq
hackbenc-536 0d..h4.. 19617126us+: icoll_mask_irq <-handle_level_irq
hackbenc-536 0d..h4.. 19617133us+: icoll_ack_irq <-handle_level_irq
hackbenc-536 0d..h4.. 19617140us+: handle_irq_event <-handle_level_irq
hackbenc-536 0d..h4.. 19617146us+: sub_preempt_count <-handle_irq_event
hackbenc-536 0d..h3.. 19617152us+: handle_irq_event_percpu <-handle_irq_event
hackbenc-536 0d..h3.. 19617159us+: mxs_timer_interrupt <-handle_irq_event_percpu
hackbenc-536 0d..h3.. 19617166us+: hrtimer_interrupt <-mxs_timer_interrupt
hackbenc-536 0d..h3.. 19617173us+: add_preempt_count <-hrtimer_interrupt
hackbenc-536 0d..h4.. 19617179us+: ktime_get_update_offsets <-hrtimer_interrupt
hackbenc-536 0d..h4.. 19617186us+: clocksource_mmio_readl_down <-ktime_get_update_offsets
----------------------- normal wakeup
hackbenc-536 0d..h4.. 19617197us+: __run_hrtimer <-hrtimer_interrupt
hackbenc-536 0d..h4.. 19617203us+: __remove_hrtimer <-__run_hrtimer
hackbenc-536 0d..h4.. 19617211us+: sub_preempt_count <-__run_hrtimer
hackbenc-536 0d..h3.. 19617217us+: hrtimer_wakeup <-__run_hrtimer
hackbenc-536 0d..h3.. 19617223us+: wake_up_process <-hrtimer_wakeup
hackbenc-536 0d..h3.. 19617230us+: try_to_wake_up <-wake_up_process
hackbenc-536 0d..h3.. 19617236us+: add_preempt_count <-try_to_wake_up
hackbenc-536 0d..h4.. 19617242us+: add_preempt_count <-try_to_wake_up
hackbenc-536 0d..h5.. 19617248us+: activate_task <-try_to_wake_up
hackbenc-536 0d..h5.. 19617254us+: enqueue_task <-activate_task
hackbenc-536 0d..h5.. 19617260us+: update_rq_clock <-enqueue_task
hackbenc-536 0d..h5.. 19617268us+: enqueue_task_rt <-enqueue_task
hackbenc-536 0d..h5.. 19617274us+: dequeue_rt_stack <-enqueue_task_rt
hackbenc-536 0d..h5.. 19617282us+: ttwu_do_wakeup.constprop.86 <-try_to_wake_up
hackbenc-536 0d..h5.. 19617287us+: check_preempt_curr <-ttwu_do_wakeup.constprop.86
hackbenc-536 0dN.h5.. 19617299us+: sub_preempt_count <-try_to_wake_up
hackbenc-536 0dN.h4.. 19617307us+: sub_preempt_count <-try_to_wake_up
hackbenc-536 0dN.h3.. 19617314us+: add_preempt_count <-__run_hrtimer
----------------------- tick_sched_timer (update jiffies) -> 604 us
hackbenc-536 0dN.h4.. 19617321us+: __run_hrtimer <-hrtimer_interrupt
hackbenc-536 0dN.h4.. 19617327us+: __remove_hrtimer <-__run_hrtimer
hackbenc-536 0dN.h4.. 19617336us+: sub_preempt_count <-__run_hrtimer
hackbenc-536 0dN.h3.. 19617343us+: tick_sched_timer <-__run_hrtimer
hackbenc-536 0dN.h3.. 19617350us+: ktime_get <-tick_sched_timer
hackbenc-536 0dN.h3.. 19617357us+: clocksource_mmio_readl_down <-ktime_get
hackbenc-536 0dN.h3.. 19617366us+: tick_sched_do_timer <-tick_sched_timer
hackbenc-536 0dN.h3.. 19617373us+: tick_do_update_jiffies64 <-tick_sched_do_timer
hackbenc-536 0dN.h3.. 19617380us+: add_preempt_count <-tick_do_update_jiffies64
hackbenc-536 0dN.h4.. 19617386us+: add_preempt_count <-tick_do_update_jiffies64
hackbenc-536 0dN.h5.. 19617394us+: do_timer <-tick_do_update_jiffies64
hackbenc-536 0dN.h5.. 19617401us+: add_preempt_count <-do_timer
hackbenc-536 0dN.h6.. 19617408us+: clocksource_mmio_readl_down <-do_timer
hackbenc-536 0dN.h6.. 19617416us+: ntp_tick_length <-do_timer
hackbenc-536 0dN.h6.. 19617427us+: ntp_tick_length <-do_timer
hackbenc-536 0dN.h6.. 19617436us+: add_preempt_count <-do_timer
hackbenc-536 0dN.h7.. 19617444us+: update_pvclock_gtod.constprop.11 <-do_timer
hackbenc-536 0dN.h7.. 19617450us+: raw_notifier_call_chain <-update_pvclock_gtod.constprop.11
hackbenc-536 0dN.h7.. 19617457us+: __raw_notifier_call_chain <-raw_notifier_call_chain
hackbenc-536 0dN.h7.. 19617464us+: notifier_call_chain <-__raw_notifier_call_chain
hackbenc-536 0dN.h7.. 19617471us+: sub_preempt_count <-do_timer
hackbenc-536 0dN.h6.. 19617477us+: sub_preempt_count <-do_timer
hackbenc-536 0dN.h5.. 19617485us+: calc_global_load <-do_timer
hackbenc-536 0dN.h5.. 19617492us+: sub_preempt_count <-tick_do_update_jiffies64
hackbenc-536 0dN.h4.. 19617499us+: sub_preempt_count <-tick_do_update_jiffies64
hackbenc-536 0dN.h3.. 19617505us+: tick_sched_handle <-tick_sched_timer
hackbenc-536 0dN.h3.. 19617512us+: update_process_times <-tick_sched_handle
hackbenc-536 0dN.h3.. 19617519us+: account_process_tick <-update_process_times
hackbenc-536 0dN.h3.. 19617525us+: account_system_time <-account_process_tick
hackbenc-536 0dN.h3.. 19617533us+: in_serving_softirq <-account_system_time
hackbenc-536 0dN.h3.. 19617541us+: cpuacct_account_field <-account_system_time
hackbenc-536 0dN.h3.. 19617547us+: __rcu_read_lock <-cpuacct_account_field
hackbenc-536 0dN.h3.. 19617565us+: __rcu_read_unlock <-cpuacct_account_field
hackbenc-536 0dN.h3.. 19617575us+: acct_account_cputime <-account_system_time
hackbenc-536 0dN.h3.. 19617583us+: __acct_update_integrals <-acct_account_cputime
hackbenc-536 0dN.h3.. 19617591us+: jiffies_to_timeval <-__acct_update_integrals
hackbenc-536 0dN.h3.. 19617600us+: scheduler_tick <-update_process_times
hackbenc-536 0dN.h3.. 19617606us+: add_preempt_count <-scheduler_tick
hackbenc-536 0dN.h4.. 19617613us+: update_rq_clock <-scheduler_tick
hackbenc-536 0dN.h4.. 19617619us+: __update_cpu_load <-scheduler_tick
hackbenc-536 0dN.h4.. 19617628us+: task_tick_fair <-scheduler_tick
hackbenc-536 0dN.h4.. 19617635us+: update_curr <-task_tick_fair
hackbenc-536 0dN.h4.. 19617642us+: update_min_vruntime <-update_curr
hackbenc-536 0dN.h4.. 19617652us+: cpuacct_charge <-update_curr
hackbenc-536 0dN.h4.. 19617659us+: __rcu_read_lock <-cpuacct_charge
hackbenc-536 0dN.h4.. 19617666us+: __rcu_read_unlock <-cpuacct_charge
hackbenc-536 0dN.h4.. 19617673us+: update_cfs_shares <-task_tick_fair
hackbenc-536 0dN.h4.. 19617681us+: sched_slice <-task_tick_fair
hackbenc-536 0dN.h4.. 19617688us+: calc_delta_mine <-sched_slice
hackbenc-536 0dN.h4.. 19617695us+: calc_delta_mine <-sched_slice
hackbenc-536 0dN.h4.. 19617702us+: calc_delta_mine <-sched_slice
hackbenc-536 0dN.h4.. 19617709us+: calc_delta_mine <-sched_slice
hackbenc-536 0dN.h4.. 19617716us+: resched_task_lazy <-task_tick_fair
hackbenc-536 0dNLh4.. 19617722us+: clear_buddies <-task_tick_fair
hackbenc-536 0dNLh4.. 19617729us+: update_curr <-task_tick_fair
hackbenc-536 0dNLh4.. 19617735us+: update_min_vruntime <-update_curr
hackbenc-536 0dNLh4.. 19617742us+: update_cfs_shares <-task_tick_fair
hackbenc-536 0dNLh4.. 19617749us+: update_curr <-task_tick_fair
hackbenc-536 0dNLh4.. 19617756us+: update_min_vruntime <-update_curr
hackbenc-536 0dNLh4.. 19617762us+: update_cfs_shares <-task_tick_fair
hackbenc-536 0dNLh4.. 19617769us+: sched_slice <-task_tick_fair
hackbenc-536 0dNLh4.. 19617775us+: calc_delta_mine <-sched_slice
hackbenc-536 0dNLh4.. 19617781us+: calc_delta_mine <-sched_slice
hackbenc-536 0dNLh4.. 19617788us+: update_curr <-task_tick_fair
hackbenc-536 0dNLh4.. 19617794us+: update_min_vruntime <-update_curr
hackbenc-536 0dNLh4.. 19617800us+: update_cfs_shares <-task_tick_fair
hackbenc-536 0dNLh4.. 19617807us+: sub_preempt_count <-scheduler_tick
hackbenc-536 0dNLh3.. 19617815us+: run_local_timers <-update_process_times
hackbenc-536 0dNLh3.. 19617821us+: hrtimer_run_queues <-run_local_timers
hackbenc-536 0dNLh3.. 19617829us+: raise_softirq <-run_local_timers
hackbenc-536 0dNLh3.. 19617836us+: raise_softirq_irqoff <-raise_softirq
hackbenc-536 0dNLh3.. 19617842us+: do_raise_softirq_irqoff <-raise_softirq_irqoff
hackbenc-536 0dNLh3.. 19617851us+: rcu_check_callbacks <-update_process_times
hackbenc-536 0dNLh3.. 19617862us+: cpu_needs_another_gp <-rcu_check_callbacks
hackbenc-536 0dNLh3.. 19617871us+: cpu_needs_another_gp <-rcu_check_callbacks
hackbenc-536 0dNLh3.. 19617879us+: cpu_needs_another_gp <-rcu_check_callbacks
hackbenc-536 0dNLh3.. 19617887us+: run_posix_cpu_timers <-update_process_times
hackbenc-536 0dNLh3.. 19617895us+: hrtimer_forward <-tick_sched_timer
hackbenc-536 0dNLh3.. 19617903us+: ktime_add_safe <-hrtimer_forward
hackbenc-536 0dNLh3.. 19617911us+: ktime_add_safe <-hrtimer_forward
hackbenc-536 0dNLh3.. 19617919us+: add_preempt_count <-__run_hrtimer
hackbenc-536 0dNLh4.. 19617925us+: enqueue_hrtimer <-__run_hrtimer
-------------------------------------- back to normal scheduling
hackbenc-536 0dNLh4.. 19617937us+: sub_preempt_count <-hrtimer_interrupt
hackbenc-536 0dNLh3.. 19617944us+: tick_program_event <-hrtimer_interrupt
hackbenc-536 0dNLh3.. 19617950us+: clockevents_program_event <-tick_program_event
hackbenc-536 0dNLh3.. 19617957us+: ktime_get <-clockevents_program_event
hackbenc-536 0dNLh3.. 19617964us+: clocksource_mmio_readl_down <-ktime_get
hackbenc-536 0dNLh5.. 19617984us+: add_preempt_count <-ring_buffer_lock_reserve
hackbenc-536 0dNLh6.. 19617992us+: sub_preempt_count <-ring_buffer_lock_reserve
hackbenc-536 0dNLh2.. 19617996us+: clockevents_program_event: delta = 8986933, force = 0
hackbenc-536 0dNLh5.. 19618018us+: add_preempt_count <-ring_buffer_lock_reserve
hackbenc-536 0dNLh6.. 19618025us+: sub_preempt_count <-ring_buffer_lock_reserve
hackbenc-536 0dNLh2.. 19618029us+: clockevents_program_event: delta = 8986933, clc = 215686
hackbenc-536 0dNLh3.. 19618043us+: timrotv2_set_next_event <-clockevents_program_event
hackbenc-536 0dNLh3.. 19618052us+: note_interrupt <-handle_irq_event_percpu
hackbenc-536 0dNLh3.. 19618061us+: add_preempt_count <-handle_irq_event
hackbenc-536 0dNLh4.. 19618070us+: cond_unmask_irq <-handle_level_irq
hackbenc-536 0dNLh4.. 19618077us+: icoll_unmask_irq <-cond_unmask_irq
hackbenc-536 0dNLh4.. 19618084us+: sub_preempt_count <-handle_level_irq
hackbenc-536 0dNLh3.. 19618091us+: irq_exit <-handle_IRQ
hackbenc-536 0dNLh3.. 19618097us+: sub_preempt_count <-irq_exit
hackbenc-536 0dNL.3.. 19618105us+: wakeup_softirqd <-irq_exit
hackbenc-536 0dNL.3.. 19618111us+: wake_up_process <-wakeup_softirqd
hackbenc-536 0dNL.3.. 19618117us+: try_to_wake_up <-wake_up_process
hackbenc-536 0dNL.3.. 19618123us+: add_preempt_count <-try_to_wake_up
hackbenc-536 0dNL.4.. 19618129us+: add_preempt_count <-try_to_wake_up
hackbenc-536 0dNL.5.. 19618136us+: activate_task <-try_to_wake_up
hackbenc-536 0dNL.5.. 19618142us+: enqueue_task <-activate_task
hackbenc-536 0dNL.5.. 19618148us+: update_rq_clock <-enqueue_task
hackbenc-536 0dNL.5.. 19618155us+: enqueue_task_rt <-enqueue_task
hackbenc-536 0dNL.5.. 19618161us+: dequeue_rt_stack <-enqueue_task_rt
hackbenc-536 0dNL.5.. 19618169us+: ttwu_do_wakeup.constprop.86 <-try_to_wake_up
hackbenc-536 0dNL.5.. 19618176us+: check_preempt_curr <-ttwu_do_wakeup.constprop.86
hackbenc-536 0dNL.5.. 19618188us+: sub_preempt_count <-try_to_wake_up
hackbenc-536 0dNL.4.. 19618195us+: sub_preempt_count <-try_to_wake_up
hackbenc-536 0dNL.3.. 19618202us+: idle_cpu <-irq_exit
hackbenc-536 0dNL.3.. 19618209us+: rcu_irq_exit <-irq_exit
----------------------- another irq -> 261us
hackbenc-536 0dNL.3.. 19618219us+: icoll_handle_irq <-__irq_svc
hackbenc-536 0dNL.3.. 19618227us+: irq_find_mapping <-icoll_handle_irq
hackbenc-536 0dNL.3.. 19618233us+: irq_linear_revmap <-irq_find_mapping
hackbenc-536 0dNL.3.. 19618240us+: handle_IRQ <-icoll_handle_irq
hackbenc-536 0dNL.3.. 19618247us+: irq_enter <-handle_IRQ
hackbenc-536 0dNL.3.. 19618253us+: rcu_irq_enter <-irq_enter
hackbenc-536 0dNL.3.. 19618260us+: add_preempt_count <-irq_enter
hackbenc-536 0dNLh3.. 19618267us+: generic_handle_irq <-handle_IRQ
hackbenc-536 0dNLh3.. 19618273us+: irq_to_desc <-generic_handle_irq
hackbenc-536 0dNLh3.. 19618281us+: handle_level_irq <-generic_handle_irq
hackbenc-536 0dNLh3.. 19618288us+: add_preempt_count <-handle_level_irq
hackbenc-536 0dNLh4.. 19618295us+: icoll_mask_irq <-handle_level_irq
hackbenc-536 0dNLh4.. 19618302us+: icoll_ack_irq <-handle_level_irq
hackbenc-536 0dNLh4.. 19618310us+: handle_irq_event <-handle_level_irq
hackbenc-536 0dNLh4.. 19618316us+: sub_preempt_count <-handle_irq_event
hackbenc-536 0dNLh3.. 19618324us+: handle_irq_event_percpu <-handle_irq_event
hackbenc-536 0dNLh3.. 19618331us+: irq_default_primary_handler <-handle_irq_event_percpu
hackbenc-536 0dNLh3.. 19618339us+: wake_up_process <-handle_irq_event_percpu
hackbenc-536 0dNLh3.. 19618347us+: try_to_wake_up <-wake_up_process
hackbenc-536 0dNLh3.. 19618353us+: add_preempt_count <-try_to_wake_up
hackbenc-536 0dNLh4.. 19618360us+: add_preempt_count <-try_to_wake_up
hackbenc-536 0dNLh5.. 19618366us+: activate_task <-try_to_wake_up
hackbenc-536 0dNLh5.. 19618372us+: enqueue_task <-activate_task
hackbenc-536 0dNLh5.. 19618378us+: update_rq_clock <-enqueue_task
hackbenc-536 0dNLh5.. 19618384us+: enqueue_task_rt <-enqueue_task
hackbenc-536 0dNLh5.. 19618391us+: dequeue_rt_stack <-enqueue_task_rt
hackbenc-536 0dNLh5.. 19618398us+: ttwu_do_wakeup.constprop.86 <-try_to_wake_up
hackbenc-536 0dNLh5.. 19618405us+: check_preempt_curr <-ttwu_do_wakeup.constprop.86
hackbenc-536 0dNLh5.. 19618414us+: sub_preempt_count <-try_to_wake_up
hackbenc-536 0dNLh4.. 19618421us+: sub_preempt_count <-try_to_wake_up
hackbenc-536 0dNLh3.. 19618428us+: note_interrupt <-handle_irq_event_percpu
hackbenc-536 0dNLh3.. 19618436us+: add_preempt_count <-handle_irq_event
hackbenc-536 0dNLh4.. 19618442us+: cond_unmask_irq <-handle_level_irq
hackbenc-536 0dNLh4.. 19618448us+: sub_preempt_count <-handle_level_irq
hackbenc-536 0dNLh3.. 19618455us+: irq_exit <-handle_IRQ
hackbenc-536 0dNLh3.. 19618461us+: sub_preempt_count <-irq_exit
hackbenc-536 0dNL.3.. 19618468us+: wakeup_softirqd <-irq_exit
hackbenc-536 0dNL.3.. 19618473us+: idle_cpu <-irq_exit
hackbenc-536 0dNL.3.. 19618480us+: rcu_irq_exit <-irq_exit
----------------------- schedule to cyclictest
hackbenc-536 0.NL.3.. 19618488us+: sub_preempt_count <-finish_task_switch.constprop.87
hackbenc-536 0.NL.2.. 19618496us+: add_preempt_count <-finish_task_switch.constprop.87
hackbenc-536 0.NL.3.. 19618554us+: sub_preempt_count <-finish_task_switch.constprop.87
hackbenc-536 0.NL.2.. 19618579us+: sub_preempt_count <-__schedule
hackbenc-536 0.NL.1.. 19618587us+: add_preempt_count <-__schedule
hackbenc-536 0.NL.2.. 19618593us+: rcu_note_context_switch <-__schedule
hackbenc-536 0dNL.2.. 19618603us+: add_preempt_count <-__schedule
hackbenc-536 0dNL.3.. 19618610us+: update_rq_clock <-__schedule
hackbenc-536 0dNL.3.. 19618617us+: put_prev_task_fair <-__schedule
hackbenc-536 0dNL.3.. 19618623us+: update_curr <-put_prev_task_fair
hackbenc-536 0dNL.3.. 19618631us+: __enqueue_entity <-put_prev_task_fair
hackbenc-536 0dNL.3.. 19618642us+: update_curr <-put_prev_task_fair
hackbenc-536 0dNL.3.. 19618649us+: __enqueue_entity <-put_prev_task_fair
hackbenc-536 0dNL.3.. 19618656us+: update_curr <-put_prev_task_fair
hackbenc-536 0dNL.3.. 19618663us+: __enqueue_entity <-put_prev_task_fair
hackbenc-536 0dNL.3.. 19618670us+: update_curr <-put_prev_task_fair
hackbenc-536 0dNL.3.. 19618676us+: __enqueue_entity <-put_prev_task_fair
hackbenc-536 0dNL.3.. 19618683us+: pick_next_task_stop <-__schedule
hackbenc-536 0dNL.3.. 19618690us+: pick_next_task_rt <-__schedule
hackbenc-536 0d...3.. 19618714us+: atomic_notifier_call_chain <-__switch_to
hackbenc-536 0d...3.. 19618730us+: __atomic_notifier_call_chain <-atomic_notifier_call_chain
hackbenc-536 0d...3.. 19618739us+: __rcu_read_lock <-__atomic_notifier_call_chain
hackbenc-536 0d...3.. 19618746us+: notifier_call_chain <-__atomic_notifier_call_chain
hackbenc-536 0d...3.. 19618752us+: __rcu_read_unlock <-__atomic_notifier_call_chain
cyclicte-623 0d...3.. 19618760us+: finish_task_switch.constprop.87 <-__schedule
cyclicte-623 0....3.. 19618768us+: sub_preempt_count <-finish_task_switch.constprop.87
cyclicte-623 0....2.. 19618774us+: add_preempt_count <-finish_task_switch.constprop.87
cyclicte-623 0....3.. 19618810us+: sub_preempt_count <-finish_task_switch.constprop.87
cyclicte-623 0....2.. 19618833us+: sub_preempt_count <-__schedule
cyclicte-623 0....1.. 19618843us+: hrtimer_cancel <-do_nanosleep
cyclicte-623 0....1.. 19618849us+: hrtimer_try_to_cancel <-hrtimer_cancel
cyclicte-623 0d...1.. 19618855us+: add_preempt_count <-hrtimer_try_to_cancel
cyclicte-623 0....2.. 19618863us+: sub_preempt_count <-hrtimer_try_to_cancel
cyclicte-623 0....1.. 19618884us+: SyS_clock_gettime <-ret_fast_syscall
cyclicte-623 0....1.. 19618891us+: clockid_to_kclock <-SyS_clock_gettime
cyclicte-623 0....1.. 19618898us+: posix_ktime_get_ts <-SyS_clock_gettime
cyclicte-623 0....1.. 19618903us+: ktime_get_ts <-posix_ktime_get_ts
cyclicte-623 0....1.. 19618911us!: clocksource_mmio_readl_down <-ktime_get_ts
cyclicte-623 0....1.. 19619032us+: SyS_write <-ret_fast_syscall
cyclicte-623 0....1.. 19619046us+: fget_light <-SyS_write
cyclicte-623 0....1.. 19619053us+: __rcu_read_lock <-fget_light
cyclicte-623 0....1.. 19619060us+: __rcu_read_unlock <-fget_light
cyclicte-623 0....1.. 19619066us+: vfs_write <-SyS_write
cyclicte-623 0....1.. 19619073us+: rw_verify_area <-vfs_write
cyclicte-623 0....1.. 19619082us+: __sb_start_write <-vfs_write
cyclicte-623 0....1.. 19619088us+: add_preempt_count <-__sb_start_write
cyclicte-623 0....2.. 19619095us+: sub_preempt_count <-__sb_start_write
cyclicte-623 0....1.. 19619104us+: get_user_pages_fast <-tracing_mark_write
cyclicte-623 0....1.. 19619110us+: rt_down_read <-get_user_pages_fast
cyclicte-623 0....1.. 19619117us+: __rt_down_read.isra.0 <-rt_down_read
cyclicte-623 0....1.. 19619124us+: rt_mutex_lock <-__rt_down_read.isra.0
cyclicte-623 0....1.. 19619130us+: rt_mutex_slowlock <-rt_mutex_lock
cyclicte-623 0....1.. 19619135us+: add_preempt_count <-rt_mutex_slowlock
cyclicte-623 0....2.. 19619142us+: __try_to_take_rt_mutex <-rt_mutex_slowlock
cyclicte-623 0....2.. 19619150us+: sub_preempt_count <-rt_mutex_slowlock
cyclicte-623 0....1.. 19619157us+: get_user_pages <-get_user_pages_fast
cyclicte-623 0....1.. 19619164us+: __get_user_pages <-get_user_pages
cyclicte-623 0....1.. 19619171us+: __get_user_pages.part.80 <-__get_user_pages
cyclicte-623 0....1.. 19619177us+: find_extend_vma <-__get_user_pages.part.80
cyclicte-623 0....1.. 19619183us+: find_vma <-find_extend_vma
cyclicte-623 0....1.. 19619193us+: _cond_resched <-__get_user_pages.part.80
cyclicte-623 0....1.. 19619200us+: follow_page_mask <-__get_user_pages.part.80
cyclicte-623 0....1.. 19619208us+: rt_spin_lock <-follow_page_mask
cyclicte-623 0....1.. 19619215us+: rt_spin_lock_slowlock <-rt_spin_lock
cyclicte-623 0....1.. 19619221us+: add_preempt_count <-rt_spin_lock_slowlock
cyclicte-623 0....2.. 19619226us+: __try_to_take_rt_mutex <-rt_spin_lock_slowlock
cyclicte-623 0....2.. 19619232us+: sub_preempt_count <-rt_spin_lock_slowlock
cyclicte-623 0....1.. 19619239us+: vm_normal_page <-follow_page_mask
cyclicte-623 0....1.. 19619247us+: mark_page_accessed <-follow_page_mask
cyclicte-623 0....1.. 19619253us+: rt_spin_unlock <-follow_page_mask
cyclicte-623 0....1.. 19619259us+: rt_spin_lock_slowunlock <-rt_spin_unlock
cyclicte-623 0....1.. 19619264us+: add_preempt_count <-rt_spin_lock_slowunlock
cyclicte-623 0....2.. 19619270us+: __rt_spin_lock_slowunlock <-rt_spin_lock_slowunlock
cyclicte-623 0....2.. 19619276us+: sub_preempt_count <-__rt_spin_lock_slowunlock
cyclicte-623 0....1.. 19619283us+: __flush_anon_page <-__get_user_pages.part.80
cyclicte-623 0....1.. 19619314us+: flush_dcache_page <-__get_user_pages.part.80
cyclicte-623 0....1.. 19619332us+: page_mapping <-flush_dcache_page
cyclicte-623 0....1.. 19619339us+: __flush_dcache_page <-flush_dcache_page
cyclicte-623 0....1.. 19619362us+: rt_up_read <-get_user_pages_fast
cyclicte-623 0....1.. 19619380us+: rt_mutex_unlock <-rt_up_read
cyclicte-623 0....1.. 19619387us+: add_preempt_count <-rt_mutex_unlock
cyclicte-623 0....2.. 19619394us+: sub_preempt_count <-rt_mutex_unlock
cyclicte-623 0....1.. 19619402us+: pagefault_disable <-tracing_mark_write
cyclicte-623 0....2.. 19619409us+: add_preempt_count <-ring_buffer_lock_reserve
cyclicte-623 0....3.. 19619414us+: sub_preempt_count <-ring_buffer_lock_reserve
cyclicte-623 0....... 19619418us+: tracing_mark_write: hit latency threshold (2000 > 2097)
cyclicte-623 0....1.. 19619432us+: pagefault_enable <-tracing_mark_write
cyclicte-623 0....1.. 19619438us+: put_page <-tracing_mark_write
cyclicte-623 0....1.. 19619446us+: __fsnotify_parent <-vfs_write
cyclicte-623 0....1.. 19619452us+: fsnotify <-vfs_write
cyclicte-623 0....1.. 19619458us+: __srcu_read_lock <-fsnotify
cyclicte-623 0....1.. 19619464us+: add_preempt_count <-__srcu_read_lock
cyclicte-623 0....2.. 19619470us+: sub_preempt_count <-__srcu_read_lock
cyclicte-623 0....1.. 19619478us+: __srcu_read_unlock <-fsnotify
cyclicte-623 0....1.. 19619485us+: __sb_end_write <-vfs_write
cyclicte-623 0....1.. 19619491us+: add_preempt_count <-__sb_end_write
cyclicte-623 0....2.. 19619497us+: sub_preempt_count <-__sb_end_write
cyclicte-623 0....1.. 19619504us+: fput <-SyS_write
cyclicte-623 0....1.. 19619522us+: SyS_write <-ret_fast_syscall
cyclicte-623 0....1.. 19619529us+: fget_light <-SyS_write
cyclicte-623 0....1.. 19619534us+: __rcu_read_lock <-fget_light
cyclicte-623 0....1.. 19619541us+: __rcu_read_unlock <-fget_light
cyclicte-623 0....1.. 19619547us+: vfs_write <-SyS_write
cyclicte-623 0....1.. 19619554us+: rw_verify_area <-vfs_write
cyclicte-623 0....1.. 19619561us+: __sb_start_write <-vfs_write
cyclicte-623 0....1.. 19619567us+: add_preempt_count <-__sb_start_write
cyclicte-623 0....2.. 19619573us+: sub_preempt_count <-__sb_start_write
cyclicte-623 0....1.. 19619587us+: _mutex_lock <-rb_simple_write
cyclicte-623 0....1.. 19619593us+: rt_mutex_lock <-_mutex_lock
cyclicte-623 0....1.. 19619599us+: rt_mutex_slowlock <-rt_mutex_lock
cyclicte-623 0....1.. 19619605us+: add_preempt_count <-rt_mutex_slowlock
cyclicte-623 0....2.. 19619611us+: __try_to_take_rt_mutex <-rt_mutex_slowlock
cyclicte-623 0....2.. 19619618us : sub_preempt_count <-rt_mutex_slowlock
--
Pengutronix e.K. | Uwe Kleine-König |
Industrial Linux Solutions | http://www.pengutronix.de/ |
[-- Attachment #2: bench-i.mx28-3.10.27-rt25.pdf --]
[-- Type: application/pdf, Size: 18365 bytes --]