linux-kernel.vger.kernel.org archive mirror
* yama: lockdep warning on yama_ptracer_del
@ 2012-10-18  3:10 Sasha Levin
  2012-10-18 22:39 ` Kees Cook
  0 siblings, 1 reply; 6+ messages in thread
From: Sasha Levin @ 2012-10-18  3:10 UTC (permalink / raw)
  To: james.l.morris, keescook, John Johansen, Thomas Gleixner
  Cc: linux-security-module, linux-kernel@vger.kernel.org, Dave Jones

Hi all,

I was fuzzing with trinity within a KVM tools guest (lkvm) on a linux-next kernel, and got the
following dump which I believe to be noise due to how the timers work - but I'm not 100% sure.

If that's actually noise, the solution would be to get the timer code to assign meaningful
names to its timer locks, right?

[  954.666095]
[  954.666471] ======================================================
[  954.668233] [ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
[  954.670194] 3.7.0-rc1-next-20121017-sasha-00002-g2353878-dirty #54 Tainted: G        W
[  954.672344] ------------------------------------------------------
[  954.674123] trinity-child34/8145 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
[  954.674123]  (ptracer_relations_lock){+.....}, at: [<ffffffff8196409d>] yama_ptracer_del+0x1d/0xa0
[  954.674123]
[  954.674123] and this task is already holding:
[  954.674123]  (&(&new_timer->it_lock)->rlock){-.-...}, at: [<ffffffff81138340>] exit_itimers+0x50/0x160
[  954.674123] which would create a new lock dependency:
[  954.674123]  (&(&new_timer->it_lock)->rlock){-.-...} -> (ptracer_relations_lock){+.....}
[  954.674123]
[  954.674123] but this new dependency connects a HARDIRQ-irq-safe lock:
[  954.674123]  (&(&new_timer->it_lock)->rlock){-.-...}
... which became HARDIRQ-irq-safe at:
[  954.674123]   [<ffffffff8117f814>] __lock_acquire+0x864/0x1ca0
[  954.674123]   [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]   [<ffffffff83a67e5c>] _raw_spin_lock_irqsave+0x7c/0xc0
[  954.674123]   [<ffffffff811377d2>] posix_timer_fn+0x32/0xd0
[  954.674123]   [<ffffffff8113df49>] __run_hrtimer+0x279/0x4d0
[  954.674123]   [<ffffffff8113eff9>] hrtimer_interrupt+0x109/0x210
[  954.674123]   [<ffffffff810981d5>] smp_apic_timer_interrupt+0x85/0xa0
[  954.674123]   [<ffffffff83a6a6b2>] apic_timer_interrupt+0x72/0x80
[  954.674123]   [<ffffffff81077e55>] default_idle+0x235/0x5b0
[  954.674123]   [<ffffffff81079a38>] cpu_idle+0x138/0x160
[  954.674123]   [<ffffffff839fe5fe>] start_secondary+0x26e/0x276
[  954.674123]
[  954.674123] to a HARDIRQ-irq-unsafe lock:
[  954.674123]  (ptracer_relations_lock){+.....}
... which became HARDIRQ-irq-unsafe at:
[  954.674123] ...  [<ffffffff8117f8df>] __lock_acquire+0x92f/0x1ca0
[  954.674123]   [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]   [<ffffffff83a67f70>] _raw_spin_lock_bh+0x40/0x80
[  954.674123]   [<ffffffff8196409d>] yama_ptracer_del+0x1d/0xa0
[  954.674123]   [<ffffffff819644ec>] yama_task_free+0xc/0x10
[  954.674123]   [<ffffffff81923b41>] security_task_free+0x11/0x30
[  954.674123]   [<ffffffff81106ab8>] __put_task_struct+0x68/0x110
[  954.674123]   [<ffffffff8110e6b8>] delayed_put_task_struct+0x118/0x180
[  954.674123]   [<ffffffff811c9739>] rcu_do_batch.isra.14+0x5a9/0xab0
[  954.674123]   [<ffffffff811c9e5a>] rcu_cpu_kthread+0x21a/0x630
[  954.674123]   [<ffffffff81144440>] smpboot_thread_fn+0x2b0/0x2e0
[  954.674123]   [<ffffffff81138b63>] kthread+0xe3/0xf0
[  954.674123]   [<ffffffff83a698bc>] ret_from_fork+0x7c/0xb0
[  954.674123]
[  954.674123] other info that might help us debug this:
[  954.674123]
[  954.674123]  Possible interrupt unsafe locking scenario:
[  954.674123]
[  954.674123]        CPU0                    CPU1
[  954.674123]        ----                    ----
[  954.674123]   lock(ptracer_relations_lock);
[  954.674123]                                local_irq_disable();
[  954.674123]                                lock(&(&new_timer->it_lock)->rlock);
[  954.674123]                                lock(ptracer_relations_lock);
[  954.674123]   <Interrupt>
[  954.674123]     lock(&(&new_timer->it_lock)->rlock);
[  954.674123]
[  954.674123]  *** DEADLOCK ***
[  954.674123]
[  954.674123] 1 lock held by trinity-child34/8145:
[  954.674123]  #0:  (&(&new_timer->it_lock)->rlock){-.-...}, at: [<ffffffff81138340>] exit_itimers+0x50/0x160
[  954.674123]
the dependencies between HARDIRQ-irq-safe lock and the holding lock:
[  954.674123] -> (&(&new_timer->it_lock)->rlock){-.-...} ops: 3138190 {
[  954.674123]    IN-HARDIRQ-W at:
[  954.674123]                     [<ffffffff8117f814>] __lock_acquire+0x864/0x1ca0
[  954.674123]                     [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]                     [<ffffffff83a67e5c>] _raw_spin_lock_irqsave+0x7c/0xc0
[  954.674123]                     [<ffffffff811377d2>] posix_timer_fn+0x32/0xd0
[  954.674123]                     [<ffffffff8113df49>] __run_hrtimer+0x279/0x4d0
[  954.674123]                     [<ffffffff8113eff9>] hrtimer_interrupt+0x109/0x210
[  954.674123]                     [<ffffffff810981d5>] smp_apic_timer_interrupt+0x85/0xa0
[  954.674123]                     [<ffffffff83a6a6b2>] apic_timer_interrupt+0x72/0x80
[  954.674123]                     [<ffffffff81077e55>] default_idle+0x235/0x5b0
[  954.674123]                     [<ffffffff81079a38>] cpu_idle+0x138/0x160
[  954.674123]                     [<ffffffff839fe5fe>] start_secondary+0x26e/0x276
[  954.674123]    IN-SOFTIRQ-W at:
[  954.674123]                     [<ffffffff8117f849>] __lock_acquire+0x899/0x1ca0
[  954.674123]                     [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]                     [<ffffffff83a67e5c>] _raw_spin_lock_irqsave+0x7c/0xc0
[  954.674123]                     [<ffffffff811377d2>] posix_timer_fn+0x32/0xd0
[  954.674123]                     [<ffffffff8113df49>] __run_hrtimer+0x279/0x4d0
[  954.674123]                     [<ffffffff8113eff9>] hrtimer_interrupt+0x109/0x210
[  954.674123]                     [<ffffffff8113f13a>] __hrtimer_peek_ahead_timers+0x3a/0x50
[  954.674123]                     [<ffffffff8113f191>] hrtimer_peek_ahead_timers+0x41/0xa0
[  954.674123]                     [<ffffffff8113f227>] run_hrtimer_softirq+0x37/0x40
[  954.674123]                     [<ffffffff81113897>] __do_softirq+0x1c7/0x440
[  954.674123]                     [<ffffffff81113b48>] run_ksoftirqd+0x38/0xa0
[  954.674123]                     [<ffffffff81144440>] smpboot_thread_fn+0x2b0/0x2e0
[  954.674123]                     [<ffffffff81138b63>] kthread+0xe3/0xf0
[  954.674123]                     [<ffffffff83a698bc>] ret_from_fork+0x7c/0xb0
[  954.674123]    INITIAL USE at:
[  954.674123]                    [<ffffffff8117f997>] __lock_acquire+0x9e7/0x1ca0
[  954.674123]                    [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]                    [<ffffffff83a67e5c>] _raw_spin_lock_irqsave+0x7c/0xc0
[  954.674123]                    [<ffffffff811374a6>] __lock_timer+0xa6/0x1a0
[  954.674123]                    [<ffffffff81137e37>] sys_timer_gettime+0x17/0x100
[  954.674123]                    [<ffffffff83a69b98>] tracesys+0xe1/0xe6
[  954.674123]  }
[  954.674123]  ... key      at: [<ffffffff85d6b940>] __key.30461+0x0/0x8
[  954.674123]  ... acquired at:
[  954.674123]    [<ffffffff8117d4ca>] check_irq_usage+0x6a/0xe0
[  954.674123]    [<ffffffff811804ba>] __lock_acquire+0x150a/0x1ca0
[  954.674123]    [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]    [<ffffffff83a67f70>] _raw_spin_lock_bh+0x40/0x80
[  954.674123]    [<ffffffff8196409d>] yama_ptracer_del+0x1d/0xa0
[  954.674123]    [<ffffffff819644ec>] yama_task_free+0xc/0x10
[  954.674123]    [<ffffffff81923b41>] security_task_free+0x11/0x30
[  954.674123]    [<ffffffff81106ab8>] __put_task_struct+0x68/0x110
[  954.674123]    [<ffffffff8113b6b7>] posix_cpu_timer_del+0xa7/0xe0
[  954.674123]    [<ffffffff81138435>] exit_itimers+0x145/0x160
[  954.674123]    [<ffffffff8111055a>] do_exit+0x1aa/0xbd0
[  954.674123]    [<ffffffff81111044>] do_group_exit+0x84/0xd0
[  954.674123]    [<ffffffff811110a2>] sys_exit_group+0x12/0x20
[  954.674123]    [<ffffffff83a69b98>] tracesys+0xe1/0xe6
[  954.674123]
[  954.674123]
the dependencies between the lock to be acquired and HARDIRQ-irq-unsafe lock:
[  954.674123] -> (ptracer_relations_lock){+.....} ops: 8538 {
[  954.674123]    HARDIRQ-ON-W at:
[  954.674123]                     [<ffffffff8117f8df>] __lock_acquire+0x92f/0x1ca0
[  954.674123]                     [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]                     [<ffffffff83a67f70>] _raw_spin_lock_bh+0x40/0x80
[  954.674123]                     [<ffffffff8196409d>] yama_ptracer_del+0x1d/0xa0
[  954.674123]                     [<ffffffff819644ec>] yama_task_free+0xc/0x10
[  954.674123]                     [<ffffffff81923b41>] security_task_free+0x11/0x30
[  954.674123]                     [<ffffffff81106ab8>] __put_task_struct+0x68/0x110
[  954.674123]                     [<ffffffff8110e6b8>] delayed_put_task_struct+0x118/0x180
[  954.674123]                     [<ffffffff811c9739>] rcu_do_batch.isra.14+0x5a9/0xab0
[  954.674123]                     [<ffffffff811c9e5a>] rcu_cpu_kthread+0x21a/0x630
[  954.674123]                     [<ffffffff81144440>] smpboot_thread_fn+0x2b0/0x2e0
[  954.674123]                     [<ffffffff81138b63>] kthread+0xe3/0xf0
[  954.674123]                     [<ffffffff83a698bc>] ret_from_fork+0x7c/0xb0
[  954.674123]    INITIAL USE at:
[  954.674123]                    [<ffffffff8117f997>] __lock_acquire+0x9e7/0x1ca0
[  954.674123]                    [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]                    [<ffffffff83a67f70>] _raw_spin_lock_bh+0x40/0x80
[  954.674123]                    [<ffffffff8196409d>] yama_ptracer_del+0x1d/0xa0
[  954.674123]                    [<ffffffff819644ec>] yama_task_free+0xc/0x10
[  954.674123]                    [<ffffffff81923b41>] security_task_free+0x11/0x30
[  954.674123]                    [<ffffffff81106ab8>] __put_task_struct+0x68/0x110
[  954.674123]                    [<ffffffff8110e6b8>] delayed_put_task_struct+0x118/0x180
[  954.674123]                    [<ffffffff811c9739>] rcu_do_batch.isra.14+0x5a9/0xab0
[  954.674123]                    [<ffffffff811c9e5a>] rcu_cpu_kthread+0x21a/0x630
[  954.674123]                    [<ffffffff81144440>] smpboot_thread_fn+0x2b0/0x2e0
[  954.674123]                    [<ffffffff81138b63>] kthread+0xe3/0xf0
[  954.674123]                    [<ffffffff83a698bc>] ret_from_fork+0x7c/0xb0
[  954.674123]  }
[  954.674123]  ... key      at: [<ffffffff854ebcb8>] ptracer_relations_lock+0x18/0x50
[  954.674123]  ... acquired at:
[  954.674123]    [<ffffffff8117d4ca>] check_irq_usage+0x6a/0xe0
[  954.674123]    [<ffffffff811804ba>] __lock_acquire+0x150a/0x1ca0
[  954.674123]    [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]    [<ffffffff83a67f70>] _raw_spin_lock_bh+0x40/0x80
[  954.674123]    [<ffffffff8196409d>] yama_ptracer_del+0x1d/0xa0
[  954.674123]    [<ffffffff819644ec>] yama_task_free+0xc/0x10
[  954.674123]    [<ffffffff81923b41>] security_task_free+0x11/0x30
[  954.674123]    [<ffffffff81106ab8>] __put_task_struct+0x68/0x110
[  954.674123]    [<ffffffff8113b6b7>] posix_cpu_timer_del+0xa7/0xe0
[  954.674123]    [<ffffffff81138435>] exit_itimers+0x145/0x160
[  954.674123]    [<ffffffff8111055a>] do_exit+0x1aa/0xbd0
[  954.674123]    [<ffffffff81111044>] do_group_exit+0x84/0xd0
[  954.674123]    [<ffffffff811110a2>] sys_exit_group+0x12/0x20
[  954.674123]    [<ffffffff83a69b98>] tracesys+0xe1/0xe6
[  954.674123]
[  954.674123]
[  954.674123] stack backtrace:
[  954.674123] Pid: 8145, comm: trinity-child34 Tainted: G        W    3.7.0-rc1-next-20121017-sasha-00002-g2353878-dirty #54
[  954.674123] Call Trace:
[  954.674123]  [<ffffffff8117d43c>] check_usage+0x49c/0x4c0
[  954.674123]  [<ffffffff8117d4ca>] check_irq_usage+0x6a/0xe0
[  954.674123]  [<ffffffff811804ba>] __lock_acquire+0x150a/0x1ca0
[  954.674123]  [<ffffffff810a4e39>] ? pvclock_clocksource_read+0x69/0x100
[  954.674123]  [<ffffffff81180b00>] ? __lock_acquire+0x1b50/0x1ca0
[  954.674123]  [<ffffffff8118324a>] lock_acquire+0x1aa/0x240
[  954.674123]  [<ffffffff8196409d>] ? yama_ptracer_del+0x1d/0xa0
[  954.674123]  [<ffffffff81076bf5>] ? sched_clock+0x15/0x20
[  954.674123]  [<ffffffff83a67f70>] _raw_spin_lock_bh+0x40/0x80
[  954.674123]  [<ffffffff8196409d>] ? yama_ptracer_del+0x1d/0xa0
[  954.674123]  [<ffffffff8117b2e2>] ? get_lock_stats+0x22/0x70
[  954.674123]  [<ffffffff8196409d>] yama_ptracer_del+0x1d/0xa0
[  954.674123]  [<ffffffff819644ec>] yama_task_free+0xc/0x10
[  954.674123]  [<ffffffff81923b41>] security_task_free+0x11/0x30
[  954.674123]  [<ffffffff81106ab8>] __put_task_struct+0x68/0x110
[  954.674123]  [<ffffffff8113b6b7>] posix_cpu_timer_del+0xa7/0xe0
[  954.674123]  [<ffffffff81138435>] exit_itimers+0x145/0x160
[  954.674123]  [<ffffffff8111055a>] do_exit+0x1aa/0xbd0
[  954.674123]  [<ffffffff811cae05>] ? rcu_user_exit+0xc5/0xf0
[  954.674123]  [<ffffffff8117de7d>] ? trace_hardirqs_on+0xd/0x10
[  954.674123]  [<ffffffff81111044>] do_group_exit+0x84/0xd0
[  954.674123]  [<ffffffff811110a2>] sys_exit_group+0x12/0x20
[  954.674123]  [<ffffffff83a69b98>] tracesys+0xe1/0xe6
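
Stripped down, what lockdep is objecting to is that new_timer->it_lock is
hardirq-safe (posix_timer_fn() takes it from the timer interrupt), while
ptracer_relations_lock is taken with spin_lock_bh() and is therefore
hardirq-unsafe, yet the exit path above acquires the latter while still
holding the former. A minimal, purely illustrative reduction of that
pattern (placeholder names, not the real kernel code) looks like this:

/*
 * Illustrative reduction of the reported dependency; placeholder
 * names, not the real kernel code.
 */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(timer_lock);     /* stands in for new_timer->it_lock */
static DEFINE_SPINLOCK(relations_lock); /* stands in for ptracer_relations_lock */

/*
 * Runs in hardirq context (cf. posix_timer_fn() via hrtimer_interrupt()
 * in the trace above), which is what makes timer_lock HARDIRQ-safe in
 * lockdep's eyes.
 */
static void timer_irq_handler(void)
{
        unsigned long flags;

        spin_lock_irqsave(&timer_lock, flags);
        /* ... deliver the timer signal ... */
        spin_unlock_irqrestore(&timer_lock, flags);
}

/*
 * Only ever disables bottom halves (cf. yama_ptracer_del()), so
 * relations_lock stays HARDIRQ-unsafe.
 */
static void relations_del(void)
{
        spin_lock_bh(&relations_lock);
        /* ... unlink the ptracer relation ... */
        spin_unlock_bh(&relations_lock);
}

/*
 * The exit path (cf. exit_itimers() -> posix_cpu_timer_del() ->
 * __put_task_struct() -> yama_task_free()): takes the HARDIRQ-unsafe
 * lock while still holding the HARDIRQ-safe one, creating the
 * timer_lock -> relations_lock dependency lockdep reports.
 */
static void exit_path(void)
{
        unsigned long flags;

        spin_lock_irqsave(&timer_lock, flags);
        relations_del();
        spin_unlock_irqrestore(&timer_lock, flags);
}

If a timer interrupt needing timer_lock fires while another CPU sits
inside relations_del(), the two CPUs can wait on each other forever,
which is exactly the scenario lockdep draws above.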


* Re: yama: lockdep warning on yama_ptracer_del
  2012-10-18  3:10 yama: lockdep warning on yama_ptracer_del Sasha Levin
@ 2012-10-18 22:39 ` Kees Cook
  2012-11-19  4:05   ` Sasha Levin
  0 siblings, 1 reply; 6+ messages in thread
From: Kees Cook @ 2012-10-18 22:39 UTC (permalink / raw)
  To: Sasha Levin
  Cc: james.l.morris, John Johansen, Thomas Gleixner,
	linux-security-module, linux-kernel@vger.kernel.org, Dave Jones

On Wed, Oct 17, 2012 at 8:10 PM, Sasha Levin <sasha.levin@oracle.com> wrote:
> Hi all,
>
> I was fuzzing with trinity within a KVM tools guest (lkvm) on a linux-next kernel, and got the
> following dump which I believe to be noise due to how the timers work - but I'm not 100% sure.
> ...
> [  954.674123]  Possible interrupt unsafe locking scenario:
> [  954.674123]
> [  954.674123]        CPU0                    CPU1
> [  954.674123]        ----                    ----
> [  954.674123]   lock(ptracer_relations_lock);
> [  954.674123]                                local_irq_disable();
> [  954.674123]                                lock(&(&new_timer->it_lock)->rlock);
> [  954.674123]                                lock(ptracer_relations_lock);
> [  954.674123]   <Interrupt>
> [  954.674123]     lock(&(&new_timer->it_lock)->rlock);
> [  954.674123]
> [  954.674123]  *** DEADLOCK ***

I've been wanting to get rid of the Yama ptracer_relations_lock
anyway, so maybe I should do that now just to avoid this case at all?
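
For context, the delete path in question is basically a locked list walk.
Paraphrased (an illustrative sketch of the 3.7-era shape, not the verbatim
security/yama/yama_lsm.c source), it amounts to:

/*
 * Paraphrased shape of the Yama delete path; illustrative only, not
 * the verbatim source.
 */
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct ptrace_relation {
        struct task_struct *tracer;
        struct task_struct *tracee;
        struct list_head node;
};

static LIST_HEAD(ptracer_relations);
static DEFINE_SPINLOCK(ptracer_relations_lock);

/*
 * Called via security_task_free() -> yama_task_free(), i.e. from
 * whichever context drops the final task reference; that is how the
 * spin_lock_bh() below ends up nested inside the hardirq-safe it_lock
 * in the trace above.
 */
static void ptracer_del(struct task_struct *tracer,
                        struct task_struct *tracee)
{
        struct ptrace_relation *relation, *tmp;

        spin_lock_bh(&ptracer_relations_lock);
        list_for_each_entry_safe(relation, tmp, &ptracer_relations, node) {
                if (relation->tracee == tracee ||
                    (tracer && relation->tracer == tracer)) {
                        list_del(&relation->node);
                        kfree(relation);
                }
        }
        spin_unlock_bh(&ptracer_relations_lock);
}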

-Kees

-- 
Kees Cook
Chrome OS Security


* Re: yama: lockdep warning on yama_ptracer_del
  2012-10-18 22:39 ` Kees Cook
@ 2012-11-19  4:05   ` Sasha Levin
  2012-11-19 16:23     ` Kees Cook
  0 siblings, 1 reply; 6+ messages in thread
From: Sasha Levin @ 2012-11-19  4:05 UTC (permalink / raw)
  To: Kees Cook
  Cc: Sasha Levin, james.l.morris, John Johansen, Thomas Gleixner,
	linux-security-module, linux-kernel@vger.kernel.org, Dave Jones

Hi Kees,

On Thu, Oct 18, 2012 at 6:39 PM, Kees Cook <keescook@chromium.org> wrote:
> On Wed, Oct 17, 2012 at 8:10 PM, Sasha Levin <sasha.levin@oracle.com> wrote:
>> Hi all,
>>
>> I was fuzzing with trinity within a KVM tools guest (lkvm) on a linux-next kernel, and got the
>> following dump which I believe to be noise due to how the timers work - but I'm not 100% sure.
>> ...
>> [  954.674123]  Possible interrupt unsafe locking scenario:
>> [  954.674123]
>> [  954.674123]        CPU0                    CPU1
>> [  954.674123]        ----                    ----
>> [  954.674123]   lock(ptracer_relations_lock);
>> [  954.674123]                                local_irq_disable();
>> [  954.674123]                                lock(&(&new_timer->it_lock)->rlock);
>> [  954.674123]                                lock(ptracer_relations_lock);
>> [  954.674123]   <Interrupt>
>> [  954.674123]     lock(&(&new_timer->it_lock)->rlock);
>> [  954.674123]
>> [  954.674123]  *** DEADLOCK ***
>
> I've been wanting to get rid of the Yama ptracer_relations_lock
> anyway, so maybe I should do that now just to avoid this case at all?

I still see this one in -rc6. Is there any way to get rid of it
before the release?


Thanks,
Sasha


* Re: yama: lockdep warning on yama_ptracer_del
  2012-11-19  4:05   ` Sasha Levin
@ 2012-11-19 16:23     ` Kees Cook
  2012-11-19 18:59       ` Sasha Levin
  0 siblings, 1 reply; 6+ messages in thread
From: Kees Cook @ 2012-11-19 16:23 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Sasha Levin, james.l.morris, John Johansen, Thomas Gleixner,
	linux-security-module, linux-kernel@vger.kernel.org, Dave Jones

On Sun, Nov 18, 2012 at 8:05 PM, Sasha Levin <levinsasha928@gmail.com> wrote:
> Hi Kees,
>
> On Thu, Oct 18, 2012 at 6:39 PM, Kees Cook <keescook@chromium.org> wrote:
>> On Wed, Oct 17, 2012 at 8:10 PM, Sasha Levin <sasha.levin@oracle.com> wrote:
>>> Hi all,
>>>
>>> I was fuzzing with trinity within a KVM tools guest (lkvm) on a linux-next kernel, and got the
>>> following dump which I believe to be noise due to how the timers work - but I'm not 100% sure.
>>> ...
>>> [  954.674123]  Possible interrupt unsafe locking scenario:
>>> [  954.674123]
>>> [  954.674123]        CPU0                    CPU1
>>> [  954.674123]        ----                    ----
>>> [  954.674123]   lock(ptracer_relations_lock);
>>> [  954.674123]                                local_irq_disable();
>>> [  954.674123]                                lock(&(&new_timer->it_lock)->rlock);
>>> [  954.674123]                                lock(ptracer_relations_lock);
>>> [  954.674123]   <Interrupt>
>>> [  954.674123]     lock(&(&new_timer->it_lock)->rlock);
>>> [  954.674123]
>>> [  954.674123]  *** DEADLOCK ***
>>
>> I've been wanting to get rid of the Yama ptracer_relations_lock
>> anyway, so maybe I should do that now just to avoid this case at all?
>
> I still see this one in -rc6. Is there any way to get rid of it
> before the release?

I'm not sure about changes to the timer locks, but I haven't been able
to get rid of the locking on Yama's task_free path. I did send a patch
to get rid of locking during a read, though:

https://lkml.org/lkml/2012/11/13/808
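
One common way to drop the lock on the read side is RCU. As an
illustrative sketch only (placeholder names, and not necessarily what the
patch above actually does), the lookup could become:

/*
 * Illustrative RCU read side: one way a lookup could avoid the lock.
 * Placeholder names; not necessarily what the patch linked above does.
 */
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

struct ptrace_relation {
        struct task_struct *tracer;
        struct task_struct *tracee;
        struct list_head node;
};

static LIST_HEAD(ptracer_relations);

static bool relation_exists(struct task_struct *tracer,
                            struct task_struct *tracee)
{
        struct ptrace_relation *relation;
        bool found = false;

        rcu_read_lock();
        list_for_each_entry_rcu(relation, &ptracer_relations, node) {
                if (relation->tracee == tracee &&
                    relation->tracer == tracer) {
                        found = true;
                        break;
                }
        }
        rcu_read_unlock();

        return found;
}

Writers would then have to insert with list_add_rcu()/list_del_rcu() and
free entries only after a grace period, e.g. via kfree_rcu().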

-Kees

-- 
Kees Cook
Chrome OS Security


* Re: yama: lockdep warning on yama_ptracer_del
  2012-11-19 16:23     ` Kees Cook
@ 2012-11-19 18:59       ` Sasha Levin
  2012-11-19 22:08         ` Kees Cook
  0 siblings, 1 reply; 6+ messages in thread
From: Sasha Levin @ 2012-11-19 18:59 UTC (permalink / raw)
  To: Kees Cook
  Cc: Sasha Levin, james.l.morris, John Johansen, Thomas Gleixner,
	linux-security-module, linux-kernel@vger.kernel.org, Dave Jones

On 11/19/2012 11:23 AM, Kees Cook wrote:
> On Sun, Nov 18, 2012 at 8:05 PM, Sasha Levin <levinsasha928@gmail.com> wrote:
>> Hi Kees,
>>
>> On Thu, Oct 18, 2012 at 6:39 PM, Kees Cook <keescook@chromium.org> wrote:
>>> On Wed, Oct 17, 2012 at 8:10 PM, Sasha Levin <sasha.levin@oracle.com> wrote:
>>>> Hi all,
>>>>
>>>> I was fuzzing with trinity within a KVM tools guest (lkvm) on a linux-next kernel, and got the
>>>> following dump which I believe to be noise due to how the timers work - but I'm not 100% sure.
>>>> ...
>>>> [  954.674123]  Possible interrupt unsafe locking scenario:
>>>> [  954.674123]
>>>> [  954.674123]        CPU0                    CPU1
>>>> [  954.674123]        ----                    ----
>>>> [  954.674123]   lock(ptracer_relations_lock);
>>>> [  954.674123]                                local_irq_disable();
>>>> [  954.674123]                                lock(&(&new_timer->it_lock)->rlock);
>>>> [  954.674123]                                lock(ptracer_relations_lock);
>>>> [  954.674123]   <Interrupt>
>>>> [  954.674123]     lock(&(&new_timer->it_lock)->rlock);
>>>> [  954.674123]
>>>> [  954.674123]  *** DEADLOCK ***
>>>
>>> I've been wanting to get rid of the Yama ptracer_relations_lock
>>> anyway, so maybe I should do that now just to avoid this case at all?
>>
>> I still see this one in -rc6. Is there any way to get rid of it
>> before the release?
> 
> I'm not sure about changes to the timer locks, but I haven't been able
> to get rid of the locking on Yama's task_free path. I did send a patch
> to get rid of locking during a read, though:
> 
> https://lkml.org/lkml/2012/11/13/808

Aw, alrighty. It didn't make it to -next yet though.

I'll add the patch to my tree and test with it.


Thanks,
Sasha



* Re: yama: lockdep warning on yama_ptracer_del
  2012-11-19 18:59       ` Sasha Levin
@ 2012-11-19 22:08         ` Kees Cook
  0 siblings, 0 replies; 6+ messages in thread
From: Kees Cook @ 2012-11-19 22:08 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Sasha Levin, james.l.morris, John Johansen, Thomas Gleixner,
	linux-security-module, linux-kernel@vger.kernel.org, Dave Jones

On Mon, Nov 19, 2012 at 10:59 AM, Sasha Levin <sasha.levin@oracle.com> wrote:
> On 11/19/2012 11:23 AM, Kees Cook wrote:
>> On Sun, Nov 18, 2012 at 8:05 PM, Sasha Levin <levinsasha928@gmail.com> wrote:
>>> Hi Kees,
>>>
>>> On Thu, Oct 18, 2012 at 6:39 PM, Kees Cook <keescook@chromium.org> wrote:
>>>> On Wed, Oct 17, 2012 at 8:10 PM, Sasha Levin <sasha.levin@oracle.com> wrote:
>>>>> Hi all,
>>>>>
>>>>> I was fuzzing with trinity within a KVM tools guest (lkvm) on a linux-next kernel, and got the
>>>>> following dump which I believe to be noise due to how the timers work - but I'm not 100% sure.
>>>>> ...
>>>>> [  954.674123]  Possible interrupt unsafe locking scenario:
>>>>> [  954.674123]
>>>>> [  954.674123]        CPU0                    CPU1
>>>>> [  954.674123]        ----                    ----
>>>>> [  954.674123]   lock(ptracer_relations_lock);
>>>>> [  954.674123]                                local_irq_disable();
>>>>> [  954.674123]                                lock(&(&new_timer->it_lock)->rlock);
>>>>> [  954.674123]                                lock(ptracer_relations_lock);
>>>>> [  954.674123]   <Interrupt>
>>>>> [  954.674123]     lock(&(&new_timer->it_lock)->rlock);
>>>>> [  954.674123]
>>>>> [  954.674123]  *** DEADLOCK ***
>>>>
>>>> I've been wanting to get rid of the Yama ptracer_relations_lock
>>>> anyway, so maybe I should do that now just to avoid this case at all?
>>>
>>> I still see this one in -rc6. Is there any way to get rid of it
>>> before the release?
>>
>> I'm not sure about changes to the timer locks, but I haven't been able
>> to get rid of the locking on Yama's task_free path. I did send a patch
>> to get rid of locking during a read, though:
>>
>> https://lkml.org/lkml/2012/11/13/808
>
> Aw, alrighty. It didn't make it to -next yet though.
>
> I'll add the patch to my tree and test with it.

Unfortunately, I don't think it'll help since your example showed the
delete path on both sides, which is still locked. I've been trying to
think of ways to avoid the lock here, but haven't hit on anything
satisfying.
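
One direction that might work (purely a sketch, untested, hypothetical
names) is to never take the lock from the task_free path at all: mark
matching relations invalid under rcu_read_lock() and defer the actual
list surgery to a workqueue, which runs in process context where
spin_lock_bh() no longer nests inside a hardirq-safe lock:

/*
 * Purely a sketch of one possible direction (untested, hypothetical
 * names): the task_free path takes no lock; a workqueue reaps the
 * entries it has marked invalid.
 */
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct ptrace_relation {
        struct task_struct *tracer;
        struct task_struct *tracee;
        bool invalid;
        struct list_head node;
        struct rcu_head rcu;
};

static LIST_HEAD(ptracer_relations);
static DEFINE_SPINLOCK(ptracer_relations_lock);

/* Workqueue callback: reap everything previously marked invalid. */
static void relations_gc(struct work_struct *work)
{
        struct ptrace_relation *relation, *tmp;

        spin_lock_bh(&ptracer_relations_lock);
        list_for_each_entry_safe(relation, tmp, &ptracer_relations, node) {
                if (relation->invalid) {
                        list_del_rcu(&relation->node);
                        kfree_rcu(relation, rcu);
                }
        }
        spin_unlock_bh(&ptracer_relations_lock);
}
static DECLARE_WORK(relations_gc_work, relations_gc);

/* Safe to call with it_lock held: no spinlock is taken here at all. */
static void ptracer_del_deferred(struct task_struct *tracer,
                                 struct task_struct *tracee)
{
        struct ptrace_relation *relation;

        rcu_read_lock();
        list_for_each_entry_rcu(relation, &ptracer_relations, node)
                if (relation->tracee == tracee ||
                    (tracer && relation->tracer == tracer))
                        relation->invalid = true;
        rcu_read_unlock();

        schedule_work(&relations_gc_work);
}

Readers would of course have to skip entries with ->invalid set, and I
have no idea whether the extra laziness fits Yama's semantics; just
thinking out loud.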

-Kees

-- 
Kees Cook
Chrome OS Security


