* Perf lockups / stack overflows on v3.17-rc6, x86_64, arm, arm64
@ 2014-09-25 15:28 Mark Rutland
2014-10-05 5:13 ` Vince Weaver
0 siblings, 1 reply; 5+ messages in thread
From: Mark Rutland @ 2014-09-25 15:28 UTC (permalink / raw)
To: linux-kernel
Cc: will.deacon, Peter Zijlstra, Paul Mackerras, Ingo Molnar,
Arnaldo Carvalho de Melo, Vince Weaver
Hi all,
I've been running Vince's fuzzer (latest HEAD, 8338ecf4a6fb6892) to test some
local perf patches, and in doing so I've encountered a number of lockups on
vanilla v3.17-rc6, on arm, arm64, and x86_64 (I've not tested 32-bit x86).
The two x86_64 logs below show the typical lockup case and a rare stack
overflow that may or may not be related, both recorded from a Core 2 system.
I'm able to trigger lockups on a Haswell system too, but as that's my development
machine, acquiring logs is a little tricky. Both issues can be triggered within
a minute or two.
On arm64 I'm able to trigger lockups after a number of minutes (usually 10+),
and I'm able to dump the stack on the RCU-stalling CPU so long as it isn't
CPU0. From a prior dump I spotted that the stalled CPU was somewhere in
do_softirq_own_stack, and my hacked-in softirq accounting shows it doesn't seem
to be making any progress with softirqs. I haven't yet figured out what it's
doing.
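(As an aside, and not something stated in the thread: the mask in the
"NOHZ: local_softirq_pending 100" lines in the logs below is printed in hex,
and a quick decode against v3.17's softirq numbering suggests it is the
HRTIMER softirq that is stuck pending. A small sketch, assuming the softirq
ordering from include/linux/interrupt.h of that era:)

```python
# Decode the hex mask from "NOHZ: local_softirq_pending 100" messages.
# Bit numbers follow the softirq enum order in v3.17's
# include/linux/interrupt.h (an assumption for this sketch).
SOFTIRQS = ["HI", "TIMER", "NET_TX", "NET_RX", "BLOCK",
            "BLOCK_IOPOLL", "TASKLET", "SCHED", "HRTIMER", "RCU"]

def decode_softirq_mask(hex_mask):
    mask = int(hex_mask, 16)
    return [name for bit, name in enumerate(SOFTIRQS) if mask & (1 << bit)]

print(decode_softirq_mask("100"))  # mask from the logs -> ['HRTIMER']
```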
Has anyone seen anything like this, or does anyone have any ideas as to what
might be going on? I didn't spot anything in tip/perf/urgent so I assume these
aren't known issues.
Thanks,
Mark.
Log 1, x86_64 lockup
---->8----
*** perf_fuzzer 0.29-pre *** by Vince Weaver
Linux version 3.17.0-rc6hark-perf-lockup+ x86_64
Processor: Intel 6/23/10
Seeding random number generator with 1411655463
/proc/sys/kernel/perf_event_max_sample_rate currently: 100000/s
/proc/sys/kernel/perf_event_paranoid currently: 1
Logging perf_event_open() failures: no
Running fsync after every syscall: no
To reproduce, try: ./perf_fuzzer -r 1411655463
Pid=2723, sleeping 1s
==================================================
Fuzzing the following syscalls:
mmap perf_event_open close read write ioctl fork prctl poll
*NOT* Fuzzing the following syscalls:
Also attempting the following:
signal-handler-on-overflow busy-instruction-loop accessing-perf-proc-and-sys-files trashing-the-mmap-page
*NOT* attempting the following:
==================================================
... userspace output removed ...
[ 157.280682] NOHZ: local_softirq_pending 100
[ 223.007005] INFO: rcu_sched detected stalls on CPUs/tasks: { 1} (detected by 3, t=21006 jiffies, g=25497, c=25496, q=3)
[ 223.007005] Task dump for CPU 1:
[ 223.007005] accounts-daemon R running task 13712 2237 1 0x10000000
[ 223.007005] ffff8801ae8d3a60 0000000000000082 ffff8800e3df08d0 ffff8801ae8d3fd8
[ 223.007005] 0000000000012900 0000000000012900 ffff8801b64d4f50 ffff8800e3df08d0
[ 223.007005] ffff8801ae8d3b98 00000000003cf95e 0000000000000000 ffff8801ae8d3bc4
[ 223.007005] Call Trace:
[ 223.007005] [<ffffffff81911f34>] schedule+0x24/0x70
[ 223.007005] [<ffffffff81914ebc>] schedule_hrtimeout_range_clock+0xfc/0x140
[ 223.007005] [<ffffffff8109cc40>] ? hrtimer_get_res+0x40/0x40
[ 223.007005] [<ffffffff81914e52>] ? schedule_hrtimeout_range_clock+0x92/0x140
[ 223.007005] [<ffffffff81914f0e>] schedule_hrtimeout_range+0xe/0x10
[ 223.007005] [<ffffffff81168794>] poll_schedule_timeout+0x44/0x60
[ 223.007005] [<ffffffff81169d22>] do_sys_poll+0x422/0x540
[ 223.007005] [<ffffffff8180e6b6>] ? unix_stream_sendmsg+0x3e6/0x420
[ 223.007005] [<ffffffff812908fb>] ? selinux_inode_permission+0x9b/0x150
[ 223.007005] [<ffffffff81168910>] ? poll_select_copy_remaining+0x130/0x130
[ 223.007005] [<ffffffff81168910>] ? poll_select_copy_remaining+0x130/0x130
[ 223.007005] [<ffffffff81168910>] ? poll_select_copy_remaining+0x130/0x130
[ 223.007005] [<ffffffff811600ea>] ? getname_flags+0x4a/0x1a0
[ 223.007005] [<ffffffff8116007d>] ? final_putname+0x1d/0x40
[ 223.007005] [<ffffffff811602f4>] ? putname+0x24/0x40
[ 223.007005] [<ffffffff8116581a>] ? user_path_at_empty+0x5a/0x90
[ 223.007005] [<ffffffff810701c0>] ? wake_up_state+0x10/0x10
[ 223.007005] [<ffffffff81198078>] ? eventfd_read+0x38/0x60
[ 223.007005] [<ffffffff810a1e75>] ? ktime_get_ts64+0x45/0xf0
[ 223.007005] [<ffffffff81169f00>] SyS_poll+0x60/0xf0
[ 223.007005] [<ffffffff81915bd2>] system_call_fastpath+0x16/0x1b
[ 286.012004] INFO: rcu_sched detected stalls on CPUs/tasks: { 1} (detected by 3, t=84007 jiffies, g=25497, c=25496, q=2003)
[ 286.012004] Task dump for CPU 1:
[ 286.012004] accounts-daemon R running task 13712 2237 1 0x10000000
[ 286.012004] ffff8801ae8d3a60 0000000000000082 ffff8800e3df08d0 ffff8801ae8d3fd8
[ 286.012004] 0000000000012900 0000000000012900 ffff8801b64d4f50 ffff8800e3df08d0
[ 286.012004] ffff8801ae8d3b98 00000000003cf95e 0000000000000000 ffff8801ae8d3bc4
[ 286.012004] Call Trace:
[ 286.012004] [<ffffffff81911f34>] schedule+0x24/0x70
[ 286.012004] [<ffffffff81914ebc>] schedule_hrtimeout_range_clock+0xfc/0x140
[ 286.012004] [<ffffffff8109cc40>] ? hrtimer_get_res+0x40/0x40
[ 286.012004] [<ffffffff81914e52>] ? schedule_hrtimeout_range_clock+0x92/0x140
[ 286.012004] [<ffffffff81914f0e>] schedule_hrtimeout_range+0xe/0x10
[ 286.012004] [<ffffffff81168794>] poll_schedule_timeout+0x44/0x60
[ 286.012004] [<ffffffff81169d22>] do_sys_poll+0x422/0x540
[ 286.012004] [<ffffffff8180e6b6>] ? unix_stream_sendmsg+0x3e6/0x420
[ 286.012004] [<ffffffff812908fb>] ? selinux_inode_permission+0x9b/0x150
[ 286.012004] [<ffffffff81168910>] ? poll_select_copy_remaining+0x130/0x130
[ 286.012004] [<ffffffff81168910>] ? poll_select_copy_remaining+0x130/0x130
[ 286.012004] [<ffffffff81168910>] ? poll_select_copy_remaining+0x130/0x130
[ 286.012004] [<ffffffff811600ea>] ? getname_flags+0x4a/0x1a0
[ 286.012004] [<ffffffff8116007d>] ? final_putname+0x1d/0x40
[ 286.012004] [<ffffffff811602f4>] ? putname+0x24/0x40
[ 286.012004] [<ffffffff8116581a>] ? user_path_at_empty+0x5a/0x90
[ 286.012004] [<ffffffff810701c0>] ? wake_up_state+0x10/0x10
[ 286.012004] [<ffffffff81198078>] ? eventfd_read+0x38/0x60
[ 286.012004] [<ffffffff810a1e75>] ? ktime_get_ts64+0x45/0xf0
[ 286.012004] [<ffffffff81169f00>] SyS_poll+0x60/0xf0
[ 286.012004] [<ffffffff81915bd2>] system_call_fastpath+0x16/0x1b
----8<----
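(Editorial aside, inferred rather than stated in the log: the stall-report
timestamps in Log 1 are consistent with the default 21-second RCU stall
timeout on a CONFIG_HZ=1000 kernel, with the follow-up report arriving three
timeouts later. A sketch of that arithmetic:)

```python
# The two stall reports in Log 1 came at t=21006 and t=84007 jiffies.
# With the default CONFIG_RCU_CPU_STALL_TIMEOUT of 21 seconds, the first
# report implies HZ ~= 1000 (an inference, not a fact from the log).
stall_timeout_s = 21
first_report_jiffies = 21006
hz = round(first_report_jiffies / stall_timeout_s)
print(hz)  # -> 1000

# RCU re-arms subsequent reports roughly 3 timeouts later:
# 21006 + 3 * 21 * 1000 = 84006, matching t=84007 to within the
# small fudge factor the stall detector adds.
print(first_report_jiffies + 3 * stall_timeout_s * hz)  # -> 84006
```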
Log 2, x86_64 stack overflow
---->8----
[mark@trouble-every-day:~/src/perf_event_tests/fuzzer]% ./perf_fuzzer
*** perf_fuzzer 0.29-pre *** by Vince Weaver
Linux version 3.17.0-rc6hark-perf-lockup+ x86_64
Processor: Intel 6/23/10
Seeding random number generator with 1411654897
/proc/sys/kernel/perf_event_max_sample_rate currently: 100000/s
/proc/sys/kernel/perf_event_paranoid currently: 1
Logging perf_event_open() failures: no
Running fsync after every syscall: no
To reproduce, try: ./perf_fuzzer -r 1411654897
Pid=4076, sleeping 1s
==================================================
Fuzzing the following syscalls:
mmap perf_event_open close read write ioctl fork prctl poll
*NOT* Fuzzing the following syscalls:
Also attempting the following:
signal-handler-on-overflow busy-instruction-loop accessing-perf-proc-and-sys-files trashing-the-mmap-page
*NOT* attempting the following:
==================================================
... userspace output removed ...
[ 317.153501] NOHZ: local_softirq_pending 100
[ 317.174346] hrtimer: interrupt took 16623 ns
[ 333.608592] NOHZ: local_softirq_pending 100
[ 337.204944] NOHZ: local_softirq_pending 100
[ 346.641345] divide error: 0000 [#1] SMP
[ 346.642010] Modules linked in:
[ 346.642010] CPU: 0 PID: 4076 Comm: perf_fuzzer Not tainted 3.17.0-rc6hark-perf-lockup+ #1
[ 346.642010] Hardware name: LENOVO 7484A3G/LENOVO, BIOS 5CKT54AUS 09/07/2009
[ 346.642010] task: ffff8801ac449a70 ti: ffff8801ac574000 task.ti: ffff8801ac574000
[ 346.642010] RIP: 0010:[<ffffffff81078bce>] [<ffffffff81078bce>] find_busiest_group+0x28e/0x8a0
[ 346.642010] RSP: 0018:ffff8801ac577760 EFLAGS: 00010006
[ 346.642010] RAX: 00000000000003ff RBX: 0000000000000000 RCX: 00000000ffff8801
[ 346.642010] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000001
[ 346.642010] RBP: ffff8801ac577890 R08: 0000000000000000 R09: 0000000000000000
[ 346.704010] ------------[ cut here ]------------
[ 346.704017] WARNING: CPU: 2 PID: 5 at arch/x86/kernel/irq_64.c:70 handle_irq+0x141/0x150()
[ 346.704019] do_IRQ(): has overflown the kernel stack (cur:1,sp:ffff8801b653fe88,irq stk top-bottom:ffff8801bed00080-ffff8801bed03fc0,exception stk top-bottom:ffff8801bed04080-ffff8801bed0a000)
[ 346.704021] Modules linked in:
[ 346.704023] CPU: 2 PID: 5 Comm: Not tainted 3.17.0-rc6hark-perf-lockup+ #1
[ 346.704023] Hardware name: LENOVO 7484A3G/LENOVO, BIOS 5CKT54AUS 09/07/2009
[ 346.704033] 0000000000000009 ffff8801bed03ea0 ffffffff8190b255 ffff8801bed03ee8
[ 346.704035] ffff8801bed03ed8 ffffffff8104ac28 000000000000001a 000000000000001a
[ 346.704036] 0000000000000061 0000000000000061 0000000000000000 ffff8801bed03f38
[ 346.704037]
----8<----
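(For reference, the addresses printed by the do_IRQ() warning in Log 2 pin
down the per-CPU stack geometry the overflow check is comparing against; a
quick calculation on the values straight from the log:)

```python
# Stack spans from the do_IRQ() "has overflown the kernel stack" warning
# in Log 2 (arch/x86/kernel/irq_64.c overflow check). Addresses are
# copied verbatim from the log line.
irq_top, irq_bottom = 0xffff8801bed00080, 0xffff8801bed03fc0
exc_top, exc_bottom = 0xffff8801bed04080, 0xffff8801bed0a000

print(hex(irq_bottom - irq_top))  # IRQ stack span -> 0x3f40 (~16 KiB)
print(hex(exc_bottom - exc_top))  # exception stacks span -> 0x5f80
```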
Log 3, arm64 lockup
---->8----
root@localhost:~/src/perf_event_tests/fuzzer# ./perf_fuzzer -r 1411488270
Using user-specified random seed of 1411488270
*** perf_fuzzer 0.29-pre *** by Vince Weaver
Linux version 3.17.0-rc6+ aarch64
Processor: UNKNOWN 0/0/0
Seeding random number generator with 1411488270
/proc/sys/kernel/perf_event_max_sample_rate currently: 285518974/s
/proc/sys/kernel/perf_event_paranoid currently: 1142898651
Logging perf_event_open() failures: no
Running fsync after every syscall: no
To reproduce, try: ./perf_fuzzer -r 1411488270
Pid=18435, sleeping 1s
==================================================
Fuzzing the following syscalls:
mmap perf_event_open close read write ioctl fork prctl poll
*NOT* Fuzzing the following syscalls:
Also attempting the following:
signal-handler-on-overflow busy-instruction-loop accessing-perf-proc-and-sys-files trashing-the-mmap-page
*NOT* attempting the following:
==================================================
... userspace output removed ...
INFO: rcu_preempt self-detected stall on CPU { 3} (t=2100 jiffies g=73728 c=73727 q=90)
Task dump for CPU 3:
swapper/3 R running task 0 0 1 0x00000002
Call trace:
[<ffffffc000088110>] dump_backtrace+0x0/0x130
[<ffffffc000088250>] show_stack+0x10/0x1c
[<ffffffc0000cf57c>] sched_show_task+0xe0/0x134
[<ffffffc0000d2654>] dump_cpu_task+0x3c/0x4c
[<ffffffc0000f3414>] rcu_dump_cpu_stacks+0x98/0xe8
[<ffffffc0000f68a0>] rcu_check_callbacks+0x4b0/0x8c4
[<ffffffc0000fa43c>] update_process_times+0x48/0x78
[<ffffffc000108b80>] tick_sched_handle.isra.18+0x1c/0x68
[<ffffffc000108c18>] tick_sched_timer+0x4c/0x78
[<ffffffc0000fab44>] __run_hrtimer.isra.33+0x48/0x104
[<ffffffc0000fb210>] hrtimer_interrupt+0x104/0x29c
[<ffffffc00040b0bc>] arch_timer_handler_phys+0x28/0x38
[<ffffffc0000efd5c>] handle_percpu_devid_irq+0x90/0xb8
[<ffffffc0000ebe20>] generic_handle_irq+0x30/0x4c
[<ffffffc0000848b8>] handle_IRQ+0x68/0xe0
[<ffffffc000081290>] gic_handle_irq+0x3c/0x80
Exception stack(0xffffffc975d67bf0 to 0xffffffc975d67d10)
7be0: 75d64000 ffffffc9 006d9dd8 ffffffc0
7c00: 75d67d30 ffffffc9 000af0b8 ffffffc0 000000c0 00000000 0073be00 ffffffc0
7c20: 75d67ce0 ffffffc9 00000101 00000000 0040b168 ffffffc0 00000020 00000000
7c40: 0ccccccd 00000000 006d8a28 ffffffc0 006d8a20 ffffffc0 00000001 00000000
7c60: 00000000 00000000 75d59340 ffffffc9 00000001 00000000 00000068 00000000
7c80: 0000004c 00000000 00000033 00000000 00000019 00000000 00000001 00000000
7ca0: 0000000e 00000000 75d64000 ffffffc9 006d9dd8 ffffffc0 75d64000 ffffffc9
7cc0: 006e2000 ffffffc0 0000001e 00000000 00000000 00000000 00000001 00000000
7ce0: 006d8a28 ffffffc0 00000002 00000000 006d8a20 ffffffc0 75d67d30 ffffffc9
7d00: 000af0a4 ffffffc0 75d67d30 ffffffc9
[<ffffffc000083da0>] el1_irq+0x60/0xd0
[<ffffffc0000af4f4>] irq_exit+0x8c/0xc0
[<ffffffc0000848bc>] handle_IRQ+0x6c/0xe0
[<ffffffc000081290>] gic_handle_irq+0x3c/0x80
Exception stack(0xffffffc975d67e30 to 0xffffffc975d67f50)
7e20: 75d64000 ffffffc9 0073d448 ffffffc0
7e40: 75d67f70 ffffffc9 000852d0 ffffffc0 00000000 00000000 006309d0 ffffffc0
7e60: 75d67f00 ffffffc9 00000001 00000000 00000008 00000000 00000001 00000000
7e80: 000fb068 ffffffc0 00000000 00000000 75d59d08 ffffffc9 00000001 00000000
7ea0: 00000000 00000000 75d59340 ffffffc9 00000001 00000000 00000004 00000000
7ec0: 00000000 00000000 963875a0 0000007f 001aa6c4 ffffffc0 962ff50c 0000007f
7ee0: ebb9a680 0000007f 75d64000 ffffffc9 0073d448 ffffffc0 00508000 ffffffc0
7f00: 0066bc18 ffffffc0 00738387 ffffffc0 00621fa8 ffffffc0 00000001 00000000
7f20: 81172000 00000000 00080330 ffffffc0 80000000 00000040 75d67f70 ffffffc9
7f40: 000852cc ffffffc0 75d67f70 ffffffc9
[<ffffffc000083da0>] el1_irq+0x60/0xd0
[<ffffffc0000debac>] cpu_startup_entry+0xfc/0x154
[<ffffffc00008f814>] secondary_start_kernel+0x110/0x120
NMI watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [swapper/3:0]
Modules linked in:
CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.17.0-rc6+ #82
task: ffffffc975d59340 ti: ffffffc975d64000 task.ti: ffffffc975d64000
PC is at __do_softirq+0xac/0x27c
LR is at __do_softirq+0x98/0x27c
pc : [<ffffffc0000af0b8>] lr : [<ffffffc0000af0a4>] pstate: 80000145
sp : ffffffc975d67d30
x29: ffffffc975d67d30 x28: ffffffc0006d8a20
x27: 0000000000000002 x26: ffffffc0006d8a28
x25: 0000000000000001 x24: 0000000000000000
x23: 000000000000001e x22: ffffffc0006e2000
x21: ffffffc975d64000 x20: ffffffc0006d9dd8
x19: ffffffc975d64000 x18: 000000000000000e
x17: 0000000000000001 x16: 0000000000000019
x15: 0000000000000033 x14: 000000000000004c
x13: 0000000000000068 x12: 0000000000000001
x11: ffffffc975d59340 x10: 0000000000000000
x9 : 0000000000000001 x8 : ffffffc0006d8a20
x7 : ffffffc0006d8a28 x6 : 000000000ccccccd
x5 : 0000000000000020 x4 : ffffffc00040b168
x3 : 0000000000000101 x2 : ffffffc975d67ce0
x1 : ffffffc00073be00 x0 : 00000000000000c0
SysRq : Show backtrace of all active CPUs
CPU0:
CPU: 0 PID: 0 Comm: swapper/0 Tainted: G L 3.17.0-rc6+ #82
task: ffffffc0006e8a20 ti: ffffffc0006dc000 task.ti: ffffffc0006dc000
PC is at arch_cpu_idle+0x10/0x18
LR is at arch_cpu_idle+0xc/0x18
pc : [<ffffffc0000852d0>] lr : [<ffffffc0000852cc>] pstate: 60000145
sp : ffffffc0006dff20
x29: ffffffc0006dff20 x28: 0000004080000000
x27: ffffffc000080578 x26: 0000000081172000
x25: 0000000000000001 x24: ffffffc000621fa8
x23: ffffffc000738387 x22: ffffffc00066bc18
x21: ffffffc000508000 x20: ffffffc00073d448
x19: ffffffc0006dc000 x18: 000000000000000d
x17: 0000007fb48d47a0 x16: ffffffc0001975ec
x15: ffffffffffffffed x14: 000000000000000d
x13: fffffffffffffe09 x12: 0000000000000001
x11: ffffffc0006e8a20 x10: 0000000000000000
x9 : 0000000000000001 x8 : ffffffc0006e93e8
x7 : 0000000000000000 x6 : ffffffc000108b2c
x5 : 0000000000000000 x4 : ffffffc0006f7700
x3 : 0000000000000001 x2 : ffffffc0006dfeb0
x1 : ffffffc0006309d0 x0 : 0000000000000000
CPU3:
Processed 0 softirqs recently, 143066 total
Call trace:
[<ffffffc000088110>] dump_backtrace+0x0/0x130
[<ffffffc000088250>] show_stack+0x10/0x1c
[<ffffffc00033d360>] showacpu+0xe4/0x11c
[<ffffffc00010db48>] flush_smp_call_function_queue+0x90/0x180
[<ffffffc00010dd04>] generic_smp_call_function_single_interrupt+0xc/0x18
[<ffffffc00008fc9c>] handle_IPI+0xe4/0x174
[<ffffffc0000812cc>] gic_handle_irq+0x78/0x80
Exception stack(0xffffffc975d67bf0 to 0xffffffc975d67d10)
7be0: 75d64000 ffffffc9 006d9dd8 ffffffc0
7c00: 75d67d30 ffffffc9 000af0b8 ffffffc0 000000c0 00000000 0073be00 ffffffc0
7c20: 75d67ce0 ffffffc9 00000101 00000000 0040b168 ffffffc0 00000020 00000000
7c40: 0ccccccd 00000000 006d8a28 ffffffc0 006d8a20 ffffffc0 00000001 00000000
7c60: 00000000 00000000 75d59340 ffffffc9 00000001 00000000 00000068 00000000
7c80: 0000004c 00000000 00000033 00000000 00000019 00000000 00000001 00000000
7ca0: 0000000e 00000000 75d64000 ffffffc9 006d9dd8 ffffffc0 75d64000 ffffffc9
7cc0: 006e2000 ffffffc0 0000001e 00000000 00000000 00000000 00000001 00000000
7ce0: 006d8a28 ffffffc0 00000002 00000000 006d8a20 ffffffc0 75d67d30 ffffffc9
7d00: 000af0a4 ffffffc0 75d67d30 ffffffc9
[<ffffffc000083da0>] el1_irq+0x60/0xd0
[<ffffffc0000af4f4>] irq_exit+0x8c/0xc0
[<ffffffc0000848bc>] handle_IRQ+0x6c/0xe0
[<ffffffc000081290>] gic_handle_irq+0x3c/0x80
Exception stack(0xffffffc975d67e30 to 0xffffffc975d67f50)
7e20: 75d64000 ffffffc9 0073d448 ffffffc0
7e40: 75d67f70 ffffffc9 000852d0 ffffffc0 00000000 00000000 006309d0 ffffffc0
7e60: 75d67f00 ffffffc9 00000001 00000000 00000008 00000000 00000001 00000000
7e80: 000fb068 ffffffc0 00000000 00000000 75d59d08 ffffffc9 00000001 00000000
7ea0: 00000000 00000000 75d59340 ffffffc9 00000001 00000000 00000004 00000000
7ec0: 00000000 00000000 963875a0 0000007f 001aa6c4 ffffffc0 962ff50c 0000007f
7ee0: ebb9a680 0000007f 75d64000 ffffffc9 0073d448 ffffffc0 00508000 ffffffc0
7f00: 0066bc18 ffffffc0 00738387 ffffffc0 00621fa8 ffffffc0 00000001 00000000
7f20: 81172000 00000000 00080330 ffffffc0 80000000 00000040 75d67f70 ffffffc9
7f40: 000852cc ffffffc0 75d67f70 ffffffc9
[<ffffffc000083da0>] el1_irq+0x60/0xd0
[<ffffffc0000debac>] cpu_startup_entry+0xfc/0x154
[<ffffffc00008f814>] secondary_start_kernel+0x110/0x120
----8<----
* Re: Perf lockups / stack overflows on v3.17-rc6, x86_64, arm, arm64
2014-09-25 15:28 Perf lockups / stack overflows on v3.17-rc6, x86_64, arm, arm64 Mark Rutland
@ 2014-10-05 5:13 ` Vince Weaver
2014-10-06 9:59 ` Mark Rutland
0 siblings, 1 reply; 5+ messages in thread
From: Vince Weaver @ 2014-10-05 5:13 UTC (permalink / raw)
To: Mark Rutland
Cc: linux-kernel, will.deacon, Peter Zijlstra, Paul Mackerras,
Ingo Molnar, Arnaldo Carvalho de Melo
On Thu, 25 Sep 2014, Mark Rutland wrote:
> Log 1, x86_64 lockup
> [ 223.007005] [<ffffffff81168910>] ? poll_select_copy_remaining+0x130/0x130
> [ 223.007005] [<ffffffff811600ea>] ? getname_flags+0x4a/0x1a0
> [ 223.007005] [<ffffffff8116007d>] ? final_putname+0x1d/0x40
> [ 223.007005] [<ffffffff811602f4>] ? putname+0x24/0x40
> [ 223.007005] [<ffffffff8116581a>] ? user_path_at_empty+0x5a/0x90
> [ 223.007005] [<ffffffff810701c0>] ? wake_up_state+0x10/0x10
> [ 223.007005] [<ffffffff81198078>] ? eventfd_read+0x38/0x60
> [ 223.007005] [<ffffffff810a1e75>] ? ktime_get_ts64+0x45/0xf0
> [ 223.007005] [<ffffffff81169f00>] SyS_poll+0x60/0xf0
I have seen similar issues before, where the problem appeared to be in the
poll/hrtimer code, but I never managed to track down anything useful about
the bug.
> Log 2, x86_64 stack overflow
> [ 346.641345] divide error: 0000 [#1] SMP
> [ 346.642010] Modules linked in:
> [ 346.642010] CPU: 0 PID: 4076 Comm: perf_fuzzer Not tainted 3.17.0-rc6hark-perf-lockup+ #1
> [ 346.642010] Hardware name: LENOVO 7484A3G/LENOVO, BIOS 5CKT54AUS 09/07/2009
> [ 346.642010] task: ffff8801ac449a70 ti: ffff8801ac574000 task.ti: ffff8801ac574000
> [ 346.642010] RIP: 0010:[<ffffffff81078bce>] [<ffffffff81078bce>] find_busiest_group+0x28e/0x8a0
> [ 346.642010] RSP: 0018:ffff8801ac577760 EFLAGS: 00010006
> [ 346.642010] RAX: 00000000000003ff RBX: 0000000000000000 RCX: 00000000ffff8801
> [ 346.642010] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000001
> [ 346.642010] RBP: ffff8801ac577890 R08: 0000000000000000 R09: 0000000000000000
> [ 346.704010] ------------[ cut here ]------------
> [ 346.704017] WARNING: CPU: 2 PID: 5 at arch/x86/kernel/irq_64.c:70 handle_irq+0x141/0x150()
> [ 346.704019] do_IRQ(): has overflown the kernel stack (cur:1,sp:ffff8801b653fe88,irq stk top-bottom:ffff8801bed00080-ffff8801bed03fc0,exception stk top-bottom:ffff8801bed04080-ffff8801bed0a000)
Weird, I have not seen this one before. Though I was hitting a reboot issue
that would give really strange crash messages, which was possibly fixed by
a patch that went into 3.17-rc7.
> Log 3, arm64 lockup
> ---->8----
> Seeding random number generator with 1411488270
> /proc/sys/kernel/perf_event_max_sample_rate currently: 285518974/s
> /proc/sys/kernel/perf_event_paranoid currently: 1142898651
Those last two lines are suspect. Is my fuzzer broken on arm64 somehow?
I do try to test on arm occasionally, but my pandaboard suffered a massive
SD card failure and I haven't had a chance to get it running again yet.
I have trouble getting any of my other flock of arm machines running
recent upstream kernels.
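(A sketch of the kind of plausibility check that would catch the suspect
readings: perf_event_paranoid is a small integer on mainline kernels, so a
value like 1142898651 almost certainly means the read or parse went wrong in
the fuzzer on arm64, not that the kernel actually set it. The bounds below
are illustrative assumptions, not taken from the fuzzer source.)

```python
# Hypothetical sanity check for the two suspect sysctl readings above.
# perf_event_paranoid is normally in -1..3; perf_event_max_sample_rate
# defaults to 100000/s, so hundreds of millions is implausible.
def looks_sane(paranoid, max_sample_rate):
    return -1 <= paranoid <= 3 and 1 <= max_sample_rate <= 1_000_000

print(looks_sane(1, 100000))              # x86_64 log values -> True
print(looks_sane(1142898651, 285518974))  # arm64 log values -> False
```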
Sorry that I don't have good answers for these bugs, but I will stick them
in my perf_fuzzer outstanding bugs list.
Vince
* Re: Perf lockups / stack overflows on v3.17-rc6, x86_64, arm, arm64
2014-10-05 5:13 ` Vince Weaver
@ 2014-10-06 9:59 ` Mark Rutland
2014-10-06 14:34 ` Mark Rutland
0 siblings, 1 reply; 5+ messages in thread
From: Mark Rutland @ 2014-10-06 9:59 UTC (permalink / raw)
To: Vince Weaver
Cc: linux-kernel@vger.kernel.org, Will Deacon, Peter Zijlstra,
Paul Mackerras, Ingo Molnar, Arnaldo Carvalho de Melo
On Sun, Oct 05, 2014 at 06:13:24AM +0100, Vince Weaver wrote:
> On Thu, 25 Sep 2014, Mark Rutland wrote:
>
> > Log 1, x86_64 lockup
> > [ 223.007005] [<ffffffff81168910>] ? poll_select_copy_remaining+0x130/0x130
> > [ 223.007005] [<ffffffff811600ea>] ? getname_flags+0x4a/0x1a0
> > [ 223.007005] [<ffffffff8116007d>] ? final_putname+0x1d/0x40
> > [ 223.007005] [<ffffffff811602f4>] ? putname+0x24/0x40
> > [ 223.007005] [<ffffffff8116581a>] ? user_path_at_empty+0x5a/0x90
> > [ 223.007005] [<ffffffff810701c0>] ? wake_up_state+0x10/0x10
> > [ 223.007005] [<ffffffff81198078>] ? eventfd_read+0x38/0x60
> > [ 223.007005] [<ffffffff810a1e75>] ? ktime_get_ts64+0x45/0xf0
> > [ 223.007005] [<ffffffff81169f00>] SyS_poll+0x60/0xf0
>
> I have seen issues similar to this before, where the problem appeared
> to be in poll/hrtimer. Never managed to track down anything useful about
> the bug.
Ok.
> > Log 2, x86_64 stack overflow
>
> > [ 346.641345] divide error: 0000 [#1] SMP
> > [ 346.642010] Modules linked in:
> > [ 346.642010] CPU: 0 PID: 4076 Comm: perf_fuzzer Not tainted 3.17.0-rc6hark-perf-lockup+ #1
> > [ 346.642010] Hardware name: LENOVO 7484A3G/LENOVO, BIOS 5CKT54AUS 09/07/2009
> > [ 346.642010] task: ffff8801ac449a70 ti: ffff8801ac574000 task.ti: ffff8801ac574000
> > [ 346.642010] RIP: 0010:[<ffffffff81078bce>] [<ffffffff81078bce>] find_busiest_group+0x28e/0x8a0
> > [ 346.642010] RSP: 0018:ffff8801ac577760 EFLAGS: 00010006
> > [ 346.642010] RAX: 00000000000003ff RBX: 0000000000000000 RCX: 00000000ffff8801
> > [ 346.642010] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000001
> > [ 346.642010] RBP: ffff8801ac577890 R08: 0000000000000000 R09: 0000000000000000
> > [ 346.704010] ------------[ cut here ]------------
> > [ 346.704017] WARNING: CPU: 2 PID: 5 at arch/x86/kernel/irq_64.c:70 handle_irq+0x141/0x150()
> > [ 346.704019] do_IRQ(): has overflown the kernel stack (cur:1,sp:ffff8801b653fe88,irq stk top-bottom:ffff8801bed00080-ffff8801bed03fc0,exception stk top-bottom:ffff8801bed04080-ffff8801bed0a000)
>
> weird, have not seen this before. Though I was hitting a reboot issue
> that would give really strange crash messages that was possibly fixed by
> a patch that went into 3.17-rc7.
Interesting. I'll retry with v3.17.
> > Log 3, arm64 lockup
> > ---->8----
>
> > Seeding random number generator with 1411488270
> > /proc/sys/kernel/perf_event_max_sample_rate currently: 285518974/s
> > /proc/sys/kernel/perf_event_paranoid currently: 1142898651
>
> Those last two lines are suspect. Is my fuzzer broken on arm64 somehow?
Good point. I'd mainly paid attention to the stack dump and hadn't
noticed. I'll take a look shortly and see what's going on.
> Sorry that I don't have good answers for these bugs, but I will stick them
> in my perf_fuzzer outstanding bugs list.
Cheers anyhow. I'll see if I can figure out anything further.
Mark.
* Re: Perf lockups / stack overflows on v3.17-rc6, x86_64, arm, arm64
2014-10-06 9:59 ` Mark Rutland
@ 2014-10-06 14:34 ` Mark Rutland
2014-10-07 5:00 ` Vince Weaver
0 siblings, 1 reply; 5+ messages in thread
From: Mark Rutland @ 2014-10-06 14:34 UTC (permalink / raw)
To: Vince Weaver
Cc: linux-kernel@vger.kernel.org, Will Deacon, Peter Zijlstra,
Paul Mackerras, Ingo Molnar, Arnaldo Carvalho de Melo
> > > Log 2, x86_64 stack overflow
> >
> > > [ 346.641345] divide error: 0000 [#1] SMP
> > > [ 346.642010] Modules linked in:
> > > [ 346.642010] CPU: 0 PID: 4076 Comm: perf_fuzzer Not tainted 3.17.0-rc6hark-perf-lockup+ #1
> > > [ 346.642010] Hardware name: LENOVO 7484A3G/LENOVO, BIOS 5CKT54AUS 09/07/2009
> > > [ 346.642010] task: ffff8801ac449a70 ti: ffff8801ac574000 task.ti: ffff8801ac574000
> > > [ 346.642010] RIP: 0010:[<ffffffff81078bce>] [<ffffffff81078bce>] find_busiest_group+0x28e/0x8a0
> > > [ 346.642010] RSP: 0018:ffff8801ac577760 EFLAGS: 00010006
> > > [ 346.642010] RAX: 00000000000003ff RBX: 0000000000000000 RCX: 00000000ffff8801
> > > [ 346.642010] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000001
> > > [ 346.642010] RBP: ffff8801ac577890 R08: 0000000000000000 R09: 0000000000000000
> > > [ 346.704010] ------------[ cut here ]------------
> > > [ 346.704017] WARNING: CPU: 2 PID: 5 at arch/x86/kernel/irq_64.c:70 handle_irq+0x141/0x150()
> > > [ 346.704019] do_IRQ(): has overflown the kernel stack (cur:1,sp:ffff8801b653fe88,irq stk top-bottom:ffff8801bed00080-ffff8801bed03fc0,exception stk top-bottom:ffff8801bed04080-ffff8801bed0a000)
> >
> > weird, have not seen this before. Though I was hitting a reboot issue
> > that would give really strange crash messages that was possibly fixed by
> > a patch that went into 3.17-rc7.
>
> Interesting. I'll retry with v3.17.
So far I haven't been able to trigger the above failure on v3.17, so perhaps
some patch has fixed that.
With the same seed (1411654897) I can trigger a hw_breakpoint warning fairly
reliably (logs from a couple of instances below).
Mark.
---->8----
[ 3268.694056] ------------[ cut here ]------------
[ 3268.694066] WARNING: CPU: 0 PID: 19671 at arch/x86/kernel/hw_breakpoint.c:119 arch_install_hw_breakpoint+0xf0/0x100()
[ 3268.694068] Can't find any breakpoint slot
[ 3268.694070] Modules linked in:
[ 3268.694075] CPU: 0 PID: 19671 Comm: perf_fuzzer Not tainted 3.17.0hark-lockup2-2014-10-06 #4
[ 3268.694077] Hardware name: LENOVO 7484A3G/LENOVO, BIOS 5CKT54AUS 09/07/2009
[ 3268.694079] 0000000000000009 ffff88019a343c78 ffffffff8182da5c ffff88019a343cc0
[ 3268.694084] ffff88019a343cb0 ffffffff8104af38 ffff8801af5ec800 ffff8801a0faae00
[ 3268.694088] ffff8801bec16780 ffff8801bec16784 000001b8c35e3e18 ffff88019a343d10
[ 3268.694092] Call Trace:
[ 3268.694098] [<ffffffff8182da5c>] dump_stack+0x45/0x56
[ 3268.694103] [<ffffffff8104af38>] warn_slowpath_common+0x78/0xa0
[ 3268.694106] [<ffffffff8104afa7>] warn_slowpath_fmt+0x47/0x50
[ 3268.694110] [<ffffffff8100a720>] arch_install_hw_breakpoint+0xf0/0x100
[ 3268.694114] [<ffffffff8111101f>] hw_breakpoint_add+0x3f/0x50
[ 3268.694117] [<ffffffff8110b3d4>] event_sched_in.isra.80+0x84/0x1b0
[ 3268.694121] [<ffffffff8110b569>] group_sched_in+0x69/0x1e0
[ 3268.694124] [<ffffffff8110c5cb>] ? perf_event_update_userpage+0xeb/0x160
[ 3268.694129] [<ffffffff8107568d>] ? sched_clock_local+0x1d/0x80
[ 3268.694132] [<ffffffff8110b7b2>] ctx_sched_in.isra.81+0xd2/0x1a0
[ 3268.694136] [<ffffffff8110b8cf>] perf_event_sched_in.isra.84+0x4f/0x70
[ 3268.694139] [<ffffffff8110bac3>] perf_event_context_sched_in.isra.85+0x73/0xc0
[ 3268.694142] [<ffffffff8110c175>] __perf_event_task_sched_in+0x185/0x1a0
[ 3268.694147] [<ffffffff8106c832>] finish_task_switch+0xb2/0xf0
[ 3268.694151] [<ffffffff818339bf>] __schedule+0x34f/0x810
[ 3268.694154] [<ffffffff81833ea4>] schedule+0x24/0x70
[ 3268.694158] [<ffffffff818395d1>] int_careful+0xd/0x14
[ 3268.694160] ---[ end trace e1f62407a7d7e846 ]---
---->8----
---->8----
[ 4016.924076] ------------[ cut here ]------------
[ 4016.925039] WARNING: CPU: 1 PID: 14091 at arch/x86/kernel/hw_breakpoint.c:119 arch_install_hw_breakpoint+0xf0/0x100()
[ 4016.925039] Can't find any breakpoint slot
[ 4016.925039] Modules linked in:
[ 4016.925039] CPU: 1 PID: 14091 Comm: perf_fuzzer Not tainted 3.17.0hark-lockup2-2014-10-06 #4
[ 4016.925039] Hardware name: LENOVO 7484A3G/LENOVO, BIOS 5CKT54AUS 09/07/2009
[ 4016.925039] 0000000000000009 ffff8800cd85f9a8 ffffffff8182da5c ffff8800cd85f9f0
[ 4016.925039] ffff8800cd85f9e0 ffffffff8104af38 ffff8800e38bd000 ffff8801a6fb1c00
[ 4016.925039] ffff8801bec96780 ffff8801bec96784 0000027400d480df ffff8800cd85fa40
[ 4016.925039] Call Trace:
[ 4016.925039] [<ffffffff8182da5c>] dump_stack+0x45/0x56
[ 4016.925039] [<ffffffff8104af38>] warn_slowpath_common+0x78/0xa0
[ 4016.925039] [<ffffffff8104afa7>] warn_slowpath_fmt+0x47/0x50
[ 4016.925039] [<ffffffff8100a720>] arch_install_hw_breakpoint+0xf0/0x100
[ 4016.925039] [<ffffffff8111101f>] hw_breakpoint_add+0x3f/0x50
[ 4016.925039] [<ffffffff8110b3d4>] event_sched_in.isra.80+0x84/0x1b0
[ 4016.925039] [<ffffffff8110b569>] group_sched_in+0x69/0x1e0
[ 4016.925039] [<ffffffff8110c5cb>] ? perf_event_update_userpage+0xeb/0x160
[ 4016.925039] [<ffffffff8107568d>] ? sched_clock_local+0x1d/0x80
[ 4016.925039] [<ffffffff8110b7b2>] ctx_sched_in.isra.81+0xd2/0x1a0
[ 4016.925039] [<ffffffff8110b8cf>] perf_event_sched_in.isra.84+0x4f/0x70
[ 4016.925039] [<ffffffff8110bac3>] perf_event_context_sched_in.isra.85+0x73/0xc0
[ 4016.925039] [<ffffffff8110c175>] __perf_event_task_sched_in+0x185/0x1a0
[ 4016.925039] [<ffffffff8106c832>] finish_task_switch+0xb2/0xf0
[ 4016.925039] [<ffffffff818339bf>] __schedule+0x34f/0x810
[ 4016.925039] [<ffffffff81833ea4>] schedule+0x24/0x70
[ 4016.925039] [<ffffffff81837ca9>] schedule_timeout+0x1b9/0x290
[ 4016.925039] [<ffffffff81835113>] ? wait_for_completion+0x23/0x100
[ 4016.925039] [<ffffffff8183518c>] wait_for_completion+0x9c/0x100
[ 4016.925039] [<ffffffff81072800>] ? wake_up_state+0x10/0x10
[ 4016.925039] [<ffffffff8109e7f0>] ? call_rcu_bh+0x20/0x20
[ 4016.925039] [<ffffffff8109c766>] wait_rcu_gp+0x46/0x50
[ 4016.925039] [<ffffffff8109c710>] ? ftrace_raw_output_rcu_utilization+0x50/0x50
[ 4016.925039] [<ffffffff8109ee93>] synchronize_sched+0x33/0x50
[ 4016.925039] [<ffffffff810f9fab>] perf_trace_event_unreg.isra.1+0x3b/0x90
[ 4016.925039] [<ffffffff810fa318>] perf_trace_destroy+0x38/0x50
[ 4016.925039] [<ffffffff81106d99>] tp_perf_event_destroy+0x9/0x10
[ 4016.925039] [<ffffffff81108523>] __free_event+0x23/0x70
[ 4016.925039] [<ffffffff8110f057>] SYSC_perf_event_open+0x397/0xa50
[ 4016.925039] [<ffffffff8110fad9>] SyS_perf_event_open+0x9/0x10
[ 4016.925039] [<ffffffff81839550>] tracesys+0xdd/0xe2
[ 4016.925039] ---[ end trace a2fe478e9cb5649b ]---
---->8----
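(Background on the warning, as a toy model rather than the kernel's actual
code: x86 has four debug-address registers, DR0-DR3, so at most four hardware
breakpoints can be installed per CPU. When arch_install_hw_breakpoint() finds
every slot taken it has nowhere to install the event, which is the
"Can't find any breakpoint slot" WARN above; the interesting question is how
the fuzzer got a fifth breakpoint scheduled in at all.)

```python
# Toy model of per-CPU breakpoint slot allocation on x86 (DR0-DR3).
# This is an illustration of the failure mode, not the kernel algorithm.
HBP_NUM = 4  # number of debug-address registers on x86

def install(slots, bp):
    """Return the slot index used, or None if every slot is taken."""
    for i in range(HBP_NUM):
        if slots[i] is None:
            slots[i] = bp
            return i
    return None  # no free slot: the kernel WARNs at this point

slots = [None] * HBP_NUM
for n in range(HBP_NUM):
    assert install(slots, "bp%d" % n) is not None
print(install(slots, "bp4"))  # a fifth breakpoint finds no slot -> None
```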
* Re: Perf lockups / stack overflows on v3.17-rc6, x86_64, arm, arm64
2014-10-06 14:34 ` Mark Rutland
@ 2014-10-07 5:00 ` Vince Weaver
0 siblings, 0 replies; 5+ messages in thread
From: Vince Weaver @ 2014-10-07 5:00 UTC (permalink / raw)
To: Mark Rutland
Cc: linux-kernel@vger.kernel.org, Will Deacon, Peter Zijlstra,
Paul Mackerras, Ingo Molnar, Arnaldo Carvalho de Melo,
Frederic Weisbecker
On Mon, 6 Oct 2014, Mark Rutland wrote:
> So far I haven't been able to trigger the above failure on v3.17, so perhaps
> some patch has fixed that.
>
> With the same seed (1411654897) I can trigger a hw_breakpoint warning
> relatively repeatably (logs for a couple of instances below).
I see those fairly frequently too. I'm adding some breakpoint people to
the CC; maybe they can help debug it.
Vince
> ---->8----
> [ 3268.694056] ------------[ cut here ]------------
> [ 3268.694066] WARNING: CPU: 0 PID: 19671 at arch/x86/kernel/hw_breakpoint.c:119 arch_install_hw_breakpoint+0xf0/0x100()
> [ 3268.694068] Can't find any breakpoint slot
> [ 3268.694070] Modules linked in:
> [ 3268.694075] CPU: 0 PID: 19671 Comm: perf_fuzzer Not tainted 3.17.0hark-lockup2-2014-10-06 #4
> [ 3268.694077] Hardware name: LENOVO 7484A3G/LENOVO, BIOS 5CKT54AUS 09/07/2009
> [ 3268.694079] 0000000000000009 ffff88019a343c78 ffffffff8182da5c ffff88019a343cc0
> [ 3268.694084] ffff88019a343cb0 ffffffff8104af38 ffff8801af5ec800 ffff8801a0faae00
> [ 3268.694088] ffff8801bec16780 ffff8801bec16784 000001b8c35e3e18 ffff88019a343d10
> [ 3268.694092] Call Trace:
> [ 3268.694098] [<ffffffff8182da5c>] dump_stack+0x45/0x56
> [ 3268.694103] [<ffffffff8104af38>] warn_slowpath_common+0x78/0xa0
> [ 3268.694106] [<ffffffff8104afa7>] warn_slowpath_fmt+0x47/0x50
> [ 3268.694110] [<ffffffff8100a720>] arch_install_hw_breakpoint+0xf0/0x100
> [ 3268.694114] [<ffffffff8111101f>] hw_breakpoint_add+0x3f/0x50
> [ 3268.694117] [<ffffffff8110b3d4>] event_sched_in.isra.80+0x84/0x1b0
> [ 3268.694121] [<ffffffff8110b569>] group_sched_in+0x69/0x1e0
> [ 3268.694124] [<ffffffff8110c5cb>] ? perf_event_update_userpage+0xeb/0x160
> [ 3268.694129] [<ffffffff8107568d>] ? sched_clock_local+0x1d/0x80
> [ 3268.694132] [<ffffffff8110b7b2>] ctx_sched_in.isra.81+0xd2/0x1a0
> [ 3268.694136] [<ffffffff8110b8cf>] perf_event_sched_in.isra.84+0x4f/0x70
> [ 3268.694139] [<ffffffff8110bac3>] perf_event_context_sched_in.isra.85+0x73/0xc0
> [ 3268.694142] [<ffffffff8110c175>] __perf_event_task_sched_in+0x185/0x1a0
> [ 3268.694147] [<ffffffff8106c832>] finish_task_switch+0xb2/0xf0
> [ 3268.694151] [<ffffffff818339bf>] __schedule+0x34f/0x810
> [ 3268.694154] [<ffffffff81833ea4>] schedule+0x24/0x70
> [ 3268.694158] [<ffffffff818395d1>] int_careful+0xd/0x14
> [ 3268.694160] ---[ end trace e1f62407a7d7e846 ]---
> ---->8----
>
> ---->8----
> [ 4016.924076] ------------[ cut here ]------------
> [ 4016.925039] WARNING: CPU: 1 PID: 14091 at arch/x86/kernel/hw_breakpoint.c:119 arch_install_hw_breakpoint+0xf0/0x100()
> [ 4016.925039] Can't find any breakpoint slot
> [ 4016.925039] Modules linked in:
> [ 4016.925039] CPU: 1 PID: 14091 Comm: perf_fuzzer Not tainted 3.17.0hark-lockup2-2014-10-06 #4
> [ 4016.925039] Hardware name: LENOVO 7484A3G/LENOVO, BIOS 5CKT54AUS 09/07/2009
> [ 4016.925039] 0000000000000009 ffff8800cd85f9a8 ffffffff8182da5c ffff8800cd85f9f0
> [ 4016.925039] ffff8800cd85f9e0 ffffffff8104af38 ffff8800e38bd000 ffff8801a6fb1c00
> [ 4016.925039] ffff8801bec96780 ffff8801bec96784 0000027400d480df ffff8800cd85fa40
> [ 4016.925039] Call Trace:
> [ 4016.925039] [<ffffffff8182da5c>] dump_stack+0x45/0x56
> [ 4016.925039] [<ffffffff8104af38>] warn_slowpath_common+0x78/0xa0
> [ 4016.925039] [<ffffffff8104afa7>] warn_slowpath_fmt+0x47/0x50
> [ 4016.925039] [<ffffffff8100a720>] arch_install_hw_breakpoint+0xf0/0x100
> [ 4016.925039] [<ffffffff8111101f>] hw_breakpoint_add+0x3f/0x50
> [ 4016.925039] [<ffffffff8110b3d4>] event_sched_in.isra.80+0x84/0x1b0
> [ 4016.925039] [<ffffffff8110b569>] group_sched_in+0x69/0x1e0
> [ 4016.925039] [<ffffffff8110c5cb>] ? perf_event_update_userpage+0xeb/0x160
> [ 4016.925039] [<ffffffff8107568d>] ? sched_clock_local+0x1d/0x80
> [ 4016.925039] [<ffffffff8110b7b2>] ctx_sched_in.isra.81+0xd2/0x1a0
> [ 4016.925039] [<ffffffff8110b8cf>] perf_event_sched_in.isra.84+0x4f/0x70
> [ 4016.925039] [<ffffffff8110bac3>] perf_event_context_sched_in.isra.85+0x73/0xc0
> [ 4016.925039] [<ffffffff8110c175>] __perf_event_task_sched_in+0x185/0x1a0
> [ 4016.925039] [<ffffffff8106c832>] finish_task_switch+0xb2/0xf0
> [ 4016.925039] [<ffffffff818339bf>] __schedule+0x34f/0x810
> [ 4016.925039] [<ffffffff81833ea4>] schedule+0x24/0x70
> [ 4016.925039] [<ffffffff81837ca9>] schedule_timeout+0x1b9/0x290
> [ 4016.925039] [<ffffffff81835113>] ? wait_for_completion+0x23/0x100
> [ 4016.925039] [<ffffffff8183518c>] wait_for_completion+0x9c/0x100
> [ 4016.925039] [<ffffffff81072800>] ? wake_up_state+0x10/0x10
> [ 4016.925039] [<ffffffff8109e7f0>] ? call_rcu_bh+0x20/0x20
> [ 4016.925039] [<ffffffff8109c766>] wait_rcu_gp+0x46/0x50
> [ 4016.925039] [<ffffffff8109c710>] ? ftrace_raw_output_rcu_utilization+0x50/0x50
> [ 4016.925039] [<ffffffff8109ee93>] synchronize_sched+0x33/0x50
> [ 4016.925039] [<ffffffff810f9fab>] perf_trace_event_unreg.isra.1+0x3b/0x90
> [ 4016.925039] [<ffffffff810fa318>] perf_trace_destroy+0x38/0x50
> [ 4016.925039] [<ffffffff81106d99>] tp_perf_event_destroy+0x9/0x10
> [ 4016.925039] [<ffffffff81108523>] __free_event+0x23/0x70
> [ 4016.925039] [<ffffffff8110f057>] SYSC_perf_event_open+0x397/0xa50
> [ 4016.925039] [<ffffffff8110fad9>] SyS_perf_event_open+0x9/0x10
> [ 4016.925039] [<ffffffff81839550>] tracesys+0xdd/0xe2
> [ 4016.925039] ---[ end trace a2fe478e9cb5649b ]---
> ---->8----
end of thread, other threads: ~2014-10-07 4:54 UTC (newest)
Thread overview: 5+ messages
2014-09-25 15:28 Perf lockups / stack overflows on v3.17-rc6, x86_64, arm, arm64 Mark Rutland
2014-10-05 5:13 ` Vince Weaver
2014-10-06 9:59 ` Mark Rutland
2014-10-06 14:34 ` Mark Rutland
2014-10-07 5:00 ` Vince Weaver