* tty ldisc lockups in linux-next
@ 2012-09-25  8:49 Sasha Levin
From: Sasha Levin @ 2012-09-25  8:49 UTC
  To: Alan Cox, Greg Kroah-Hartman
  Cc: Dave Jones, linux-kernel@vger.kernel.org, Jiri Slaby

Hi all,

While fuzzing with trinity in a KVM tools guest running a linux-next kernel, I keep hitting the following lockup:

[  842.780242] INFO: task init:1 blocked for more than 120 seconds.
[  842.780732] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  842.781559] init            D ffff88000d5b0000  3344     1      0 0x00000002
[  842.783226]  ffff88000d5adc28 0000000000000082 ffff88000d5adbe8 ffffffff81150ac5
[  842.784714]  ffff88000d5adfd8 ffff88000d5adfd8 ffff88000d5adfd8 ffff88000d5adfd8
[  842.785737]  ffffffff84e2e420 ffff88000d5b0000 ffff88000d5b08f0 7fffffffffffffff
[  842.786764] Call Trace:
[  842.787102]  [<ffffffff81150ac5>] ? sched_clock_local+0x25/0xa0
[  842.787858]  [<ffffffff83a0be45>] schedule+0x55/0x60
[  842.788511]  [<ffffffff83a09dd5>] schedule_timeout+0x45/0x360
[  842.789251]  [<ffffffff83a0d54d>] ? _raw_spin_unlock_irqrestore+0x5d/0xb0
[  842.790149]  [<ffffffff8117b13d>] ? trace_hardirqs_on+0xd/0x10
[  842.790594]  [<ffffffff83a0d574>] ? _raw_spin_unlock_irqrestore+0x84/0xb0
[  842.791096]  [<ffffffff81137af7>] ? prepare_to_wait+0x77/0x90
[  842.791535]  [<ffffffff81b9b2c6>] tty_ldisc_wait_idle.isra.7+0x76/0xb0
[  842.792016]  [<ffffffff81137cd0>] ? abort_exclusive_wait+0xb0/0xb0
[  842.792490]  [<ffffffff81b9c03b>] tty_ldisc_hangup+0x1cb/0x320
[  842.792924]  [<ffffffff81b933a2>] ? __tty_hangup+0x122/0x430
[  842.793364]  [<ffffffff81b933aa>] __tty_hangup+0x12a/0x430
[  842.794077]  [<ffffffff83a0d574>] ? _raw_spin_unlock_irqrestore+0x84/0xb0
[  842.794942]  [<ffffffff81b955cc>] disassociate_ctty+0x6c/0x230
[  842.795693]  [<ffffffff8110e7e8>] do_exit+0x3d8/0xa90
[  842.796361]  [<ffffffff83a0e4d9>] ? retint_swapgs+0x13/0x1b
[  842.797079]  [<ffffffff8110ef64>] do_group_exit+0x84/0xd0
[  842.797818]  [<ffffffff8110efc2>] sys_exit_group+0x12/0x20
[  842.798524]  [<ffffffff83a0edcd>] system_call_fastpath+0x1a/0x1f
[  842.799294] 1 lock held by init/1:
[  842.799734]  #0:  (&tty->ldisc_mutex){+.+.+.}, at: [<ffffffff81b9bf92>] tty_ldisc_hangup+0x122/0x320
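
The 7fffffffffffffff on init's stack suggests the hangup path is waiting
with MAX_SCHEDULE_TIMEOUT, i.e. it will never time out on its own. If I
read the code right, tty_ldisc_wait_idle() is essentially the following
(paraphrasing from memory, so the exact wait queue and return values may
differ):

	static int tty_ldisc_wait_idle(struct tty_struct *tty, long timeout)
	{
		long ret;

		/* Sleep until this task holds the only ldisc reference. */
		ret = wait_event_timeout(tty_ldisc_idle,
				atomic_read(&tty->ldisc->users) == 1, timeout);
		return ret > 0 ? 0 : -EBUSY;
	}

So init can only make progress once every other ldisc user drops its
reference.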


Thanks,
Sasha


* Re: tty ldisc lockups in linux-next
@ 2012-09-25  8:52 Jiri Slaby
From: Jiri Slaby @ 2012-09-25  8:52 UTC
  To: Sasha Levin
  Cc: Alan Cox, Greg Kroah-Hartman, Dave Jones,
	linux-kernel@vger.kernel.org

On 09/25/2012 10:49 AM, Sasha Levin wrote:
> Hi all,
> 
> While fuzzing with trinity in a KVM tools guest running a linux-next kernel, I keep hitting the following lockup:

Hi, I'm confused here. Is this different to what you reported a couple
days ago? Doesn't reverting aa3c8af86382 help in the end?

> [  842.780242] INFO: task init:1 blocked for more than 120 seconds.
> [  842.780732] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  842.781559] init            D ffff88000d5b0000  3344     1      0 0x00000002
> [  842.783226]  ffff88000d5adc28 0000000000000082 ffff88000d5adbe8 ffffffff81150ac5
> [  842.784714]  ffff88000d5adfd8 ffff88000d5adfd8 ffff88000d5adfd8 ffff88000d5adfd8
> [  842.785737]  ffffffff84e2e420 ffff88000d5b0000 ffff88000d5b08f0 7fffffffffffffff
> [  842.786764] Call Trace:
> [  842.787102]  [<ffffffff81150ac5>] ? sched_clock_local+0x25/0xa0
> [  842.787858]  [<ffffffff83a0be45>] schedule+0x55/0x60
> [  842.788511]  [<ffffffff83a09dd5>] schedule_timeout+0x45/0x360
> [  842.789251]  [<ffffffff83a0d54d>] ? _raw_spin_unlock_irqrestore+0x5d/0xb0
> [  842.790149]  [<ffffffff8117b13d>] ? trace_hardirqs_on+0xd/0x10
> [  842.790594]  [<ffffffff83a0d574>] ? _raw_spin_unlock_irqrestore+0x84/0xb0
> [  842.791096]  [<ffffffff81137af7>] ? prepare_to_wait+0x77/0x90
> [  842.791535]  [<ffffffff81b9b2c6>] tty_ldisc_wait_idle.isra.7+0x76/0xb0
> [  842.792016]  [<ffffffff81137cd0>] ? abort_exclusive_wait+0xb0/0xb0
> [  842.792490]  [<ffffffff81b9c03b>] tty_ldisc_hangup+0x1cb/0x320
> [  842.792924]  [<ffffffff81b933a2>] ? __tty_hangup+0x122/0x430
> [  842.793364]  [<ffffffff81b933aa>] __tty_hangup+0x12a/0x430
> [  842.794077]  [<ffffffff83a0d574>] ? _raw_spin_unlock_irqrestore+0x84/0xb0
> [  842.794942]  [<ffffffff81b955cc>] disassociate_ctty+0x6c/0x230
> [  842.795693]  [<ffffffff8110e7e8>] do_exit+0x3d8/0xa90
> [  842.796361]  [<ffffffff83a0e4d9>] ? retint_swapgs+0x13/0x1b
> [  842.797079]  [<ffffffff8110ef64>] do_group_exit+0x84/0xd0
> [  842.797818]  [<ffffffff8110efc2>] sys_exit_group+0x12/0x20
> [  842.798524]  [<ffffffff83a0edcd>] system_call_fastpath+0x1a/0x1f
> [  842.799294] 1 lock held by init/1:
> [  842.799734]  #0:  (&tty->ldisc_mutex){+.+.+.}, at: [<ffffffff81b9bf92>] tty_ldisc_hangup+0x122/0x320



-- 
js
suse labs


* Re: tty ldisc lockups in linux-next
@ 2012-09-25  8:55 Sasha Levin
From: Sasha Levin @ 2012-09-25  8:55 UTC
  To: Jiri Slaby
  Cc: Sasha Levin, Alan Cox, Greg Kroah-Hartman, Dave Jones,
	linux-kernel@vger.kernel.org

On 09/25/2012 10:52 AM, Jiri Slaby wrote:
>> Hi all,
>>
>> While fuzzing with trinity in a KVM tools guest running a linux-next kernel, I keep hitting the following lockup:
> Hi, I'm confused here. Is this different to what you reported a couple
> days ago? Doesn't reverting aa3c8af86382 help in the end?

I was just about to send a reply to that mail saying that while reverting aa3c8af86382 reduces the odds of seeing it, it still
happens. You were faster than me :)

But yes, it still happens even if I revert aa3c8af86382 or try applying your patch in that thread.


Thanks,
Sasha



* Re: tty ldisc lockups in linux-next
@ 2012-09-25  8:56 Jiri Slaby
From: Jiri Slaby @ 2012-09-25  8:56 UTC
  To: Sasha Levin
  Cc: Sasha Levin, Alan Cox, Greg Kroah-Hartman, Dave Jones,
	linux-kernel@vger.kernel.org

On 09/25/2012 10:55 AM, Sasha Levin wrote:
> On 09/25/2012 10:52 AM, Jiri Slaby wrote:
>>> Hi all,
>>>
>>> While fuzzing with trinity in a KVM tools guest running a linux-next kernel, I keep hitting the following lockup:
>> Hi, I'm confused here. Is this different to what you reported a couple
>> days ago? Doesn't reverting aa3c8af86382 help in the end?
> 
> I was just about to send a reply to that mail saying that while reverting aa3c8af86382 reduces the odds of seeing it, it still
> happens. You were faster than me :)
> 
> But yes, it still happens even if I revert aa3c8af86382 or try applying your patch in that thread.

The patch won't help, that much is pretty certain.

Instead I still wonder what process sits on the terminal. Could you
investigate?
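
(If the guest still responds, "echo t > /proc/sysrq-trigger" dumps every
task's state and stack, and if lockdep is enabled, "echo d >
/proc/sysrq-trigger" prints all locks held in the system. That should
show who still holds a reference to the ldisc.)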

-- 
js
suse labs


* Re: tty ldisc lockups in linux-next
@ 2012-09-25  9:47 Sasha Levin
From: Sasha Levin @ 2012-09-25  9:47 UTC
  To: Jiri Slaby
  Cc: Sasha Levin, Alan Cox, Greg Kroah-Hartman, Dave Jones,
	linux-kernel@vger.kernel.org

On 09/25/2012 10:56 AM, Jiri Slaby wrote:
> On 09/25/2012 10:55 AM, Sasha Levin wrote:
>> On 09/25/2012 10:52 AM, Jiri Slaby wrote:
>>>> Hi all,
>>>>
>>>> While fuzzing with trinity in a KVM tools guest running a linux-next kernel, I keep hitting the following lockup:
>>> Hi, I'm confused here. Is this different to what you reported a couple
>>> days ago? Doesn't reverting aa3c8af86382 help in the end?
>>
>> I was just about to send a reply to that mail saying that while reverting aa3c8af86382 reduces the odds of seeing it, it still
>> happens. You were faster than me :)
>>
>> But yes, it still happens even if I revert aa3c8af86382 or try applying your patch in that thread.
> 
> The patch won't help, that much is pretty certain.
> 
> Instead I still wonder what process sits on the terminal. Could you
> investigate?
> 

It looks like sh is trying to read:

[  606.950194] sh              S 0000000000000001  4800  6260      1 0x00000000
[  606.950194]  ffff88000c0ddcc8 0000000000000082 ffffffff847baa68 0000000000000b02
[  606.950194]  ffff88000c0ddfd8 ffff88000c0ddfd8 ffff88000c0ddfd8 ffff88000c0ddfd8
[  606.950194]  ffff88000f578000 ffff88000c0bb000 ffff88000c0ddd98 ffff880040b4d000
[  606.950194] Call Trace:
[  606.950194]  [<ffffffff83a0be45>] schedule+0x55/0x60
[  606.950194]  [<ffffffff83a09dd5>] schedule_timeout+0x45/0x360
[  606.950194]  [<ffffffff83a0d54d>] ? _raw_spin_unlock_irqrestore+0x5d/0xb0
[  606.950194]  [<ffffffff8117b13d>] ? trace_hardirqs_on+0xd/0x10
[  606.950194]  [<ffffffff83a0d574>] ? _raw_spin_unlock_irqrestore+0x84/0xb0
[  606.950194]  [<ffffffff81b98271>] n_tty_read+0x4c1/0x9a0
[  606.950194]  [<ffffffff83a0d54d>] ? _raw_spin_unlock_irqrestore+0x5d/0xb0
[  606.950194]  [<ffffffff8114d760>] ? try_to_wake_up+0x360/0x360
[  606.950194]  [<ffffffff81b922cf>] tty_read+0x8f/0x100
[  606.950194]  [<ffffffff8127187d>] vfs_read+0xad/0x180
[  606.950194]  [<ffffffff81271c10>] sys_read+0x50/0xa0
[  606.950194]  [<ffffffff83a0edcd>] system_call_fastpath+0x1a/0x1f

While init is trying to exit:

[  605.524940] init            D ffff88000d5b0000  3376     1      0 0x00000002
[  605.527502]  ffff88000d5adc28 0000000000000082 ffff88000d5adbe8 ffffffff81150ac5
[  605.529685]  ffff88000d5adfd8 ffff88000d5adfd8 ffff88000d5adfd8 ffff88000d5adfd8
[  605.530939]  ffff88000d613000 ffff88000d5b0000 ffff88000d5b08f0 7fffffffffffffff
[  605.532064] Call Trace:
[  605.532064]  [<ffffffff81150ac5>] ? sched_clock_local+0x25/0xa0
[  605.532064]  [<ffffffff83a0be45>] schedule+0x55/0x60
[  605.532064]  [<ffffffff83a09dd5>] schedule_timeout+0x45/0x360
[  605.532064]  [<ffffffff83a0d54d>] ? _raw_spin_unlock_irqrestore+0x5d/0xb0
[  605.532064]  [<ffffffff8117b13d>] ? trace_hardirqs_on+0xd/0x10
[  605.532064]  [<ffffffff83a0d574>] ? _raw_spin_unlock_irqrestore+0x84/0xb0
[  605.532064]  [<ffffffff81137af7>] ? prepare_to_wait+0x77/0x90
[  605.532064]  [<ffffffff81b9b2c6>] tty_ldisc_wait_idle.isra.7+0x76/0xb0
[  605.532064]  [<ffffffff81137cd0>] ? abort_exclusive_wait+0xb0/0xb0
[  605.532064]  [<ffffffff81b9c03b>] tty_ldisc_hangup+0x1cb/0x320
[  605.532064]  [<ffffffff81b933a2>] ? __tty_hangup+0x122/0x430
[  605.532064]  [<ffffffff81b933aa>] __tty_hangup+0x12a/0x430
[  605.532064]  [<ffffffff83a0d574>] ? _raw_spin_unlock_irqrestore+0x84/0xb0
[  605.532064]  [<ffffffff81b955cc>] disassociate_ctty+0x6c/0x230
[  605.532064]  [<ffffffff8110e7e8>] do_exit+0x3d8/0xa90
[  605.532064]  [<ffffffff83a0e4d9>] ? retint_swapgs+0x13/0x1b
[  605.532064]  [<ffffffff8110ef64>] do_group_exit+0x84/0xd0
[  605.532064]  [<ffffffff8110efc2>] sys_exit_group+0x12/0x20
[  605.532064]  [<ffffffff83a0edcd>] system_call_fastpath+0x1a/0x1f

And the corresponding lock info:

[  606.950194] Showing all locks held in the system:
[  606.950194] 1 lock held by init/1:
[  606.950194]  #0:  (&tty->ldisc_mutex){+.+.+.}, at: [<ffffffff81b9bf92>] tty_ldisc_hangup+0x122/0x320
[  606.950194] 1 lock held by sh/6260:
[  606.950194]  #0:  (&tty->atomic_read_lock){+.+...}, at: [<ffffffff81b98078>] n_tty_read+0x2c8/0x9a0
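
So this looks less like a classic AB-BA deadlock and more like a lost
wakeup: init holds tty->ldisc_mutex and sleeps in tty_ldisc_wait_idle()
until the ldisc user count drops back to one, while sh holds a reference
to the ldisc and sleeps in n_tty_read() under atomic_read_lock waiting
for input that apparently never arrives, so the count never drops. A
minimal userspace sketch of that dependency (hypothetical names and
locking, nothing like the real kernel code) would be:

	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	static pthread_mutex_t ldisc_mutex = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t idle = PTHREAD_COND_INITIALIZER;
	static pthread_cond_t input = PTHREAD_COND_INITIALIZER;
	static int users = 1;   /* the ldisc's own base reference */
	static int have_input;  /* stays 0: nothing ever writes to the tty */

	/* Plays the role of sh sleeping in n_tty_read(). */
	static void *reader(void *arg)
	{
		pthread_mutex_lock(&state_lock);
		users++;                          /* take an ldisc reference */
		while (!have_input)               /* block for input, forever */
			pthread_cond_wait(&input, &state_lock);
		users--;                          /* never reached */
		pthread_cond_signal(&idle);
		pthread_mutex_unlock(&state_lock);
		return NULL;
	}

	/* Plays the role of init in tty_ldisc_hangup(). */
	static void *hangup(void *arg)
	{
		sleep(1);                         /* crude: let the reader block first */
		pthread_mutex_lock(&ldisc_mutex); /* the mutex lockdep reports */
		pthread_mutex_lock(&state_lock);
		while (users != 1)                /* tty_ldisc_wait_idle() equivalent */
			pthread_cond_wait(&idle, &state_lock);
		pthread_mutex_unlock(&state_lock);
		pthread_mutex_unlock(&ldisc_mutex);
		return NULL;
	}

	int main(void)
	{
		pthread_t r, h;

		pthread_create(&r, NULL, reader, NULL);
		pthread_create(&h, NULL, hangup, NULL);
		pthread_join(h, NULL);            /* hangs here, like the traces above */
		puts("hangup completed (does not happen)");
		return 0;
	}

Unless something kicks sh out of n_tty_read() before init starts waiting
for the ldisc to go idle, the hangup can never finish.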


